Date: Tue, 2 Feb 2021 17:25:47 +0100
From: Marco Elver
To: Andrey Konovalov
Cc: Catalin Marinas, Vincenzo Frascino, Dmitry Vyukov, Alexander Potapenko,
    Andrew Morton, Will Deacon, Andrey Ryabinin, Peter Collingbourne,
    Evgenii Stepanov, Branislav Rankov, Kevin Brodsky, kasan-dev@googlegroups.com,
    linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 02/12] kasan, mm: optimize kmalloc poisoning

On Mon, Feb 01, 2021 at 08:43PM +0100, Andrey Konovalov wrote:
> For allocations from kmalloc caches, kasan_kmalloc() always follows
> kasan_slab_alloc(). Currently, both of them unpoison the whole object,
> which is unnecessary.
>
> This patch provides separate implementations for both annotations:
> kasan_slab_alloc() unpoisons the whole object, and kasan_kmalloc()
> only poisons the redzone.
>
> For generic KASAN, the redzone start might not be aligned to
> KASAN_GRANULE_SIZE. Therefore, the poisoning is split in two parts:
> kasan_poison_last_granule() poisons the unaligned part, and then
> kasan_poison() poisons the rest.
>
> This patch also clarifies alignment guarantees of each of the poisoning
> functions and drops the unnecessary round_up() call for redzone_end.
>
> With this change, the early SLUB cache annotation needs to be changed to
> kasan_slab_alloc(), as kasan_kmalloc() doesn't unpoison objects now.
> The number of poisoned bytes for objects in this cache stays the same, as
> kmem_cache_node->object_size is equal to sizeof(struct kmem_cache_node).
>
> Signed-off-by: Andrey Konovalov
> ---
>  mm/kasan/common.c | 93 +++++++++++++++++++++++++++++++----------------
>  mm/kasan/kasan.h  | 43 +++++++++++++++++++++-
>  mm/kasan/shadow.c | 28 +++++++-------
>  mm/slub.c         |  3 +-
>  4 files changed, 119 insertions(+), 48 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 374049564ea3..128cb330ca73 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -278,21 +278,11 @@ void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
>   * based on objects indexes, so that objects that are next to each other
>   * get different tags.
>   */
> -static u8 assign_tag(struct kmem_cache *cache, const void *object,
> -                        bool init, bool keep_tag)
> +static u8 assign_tag(struct kmem_cache *cache, const void *object, bool init)
>  {
>          if (IS_ENABLED(CONFIG_KASAN_GENERIC))
>                  return 0xff;
>
> -        /*
> -         * 1. When an object is kmalloc()'ed, two hooks are called:
> -         *    kasan_slab_alloc() and kasan_kmalloc(). We assign the
> -         *    tag only in the first one.
> -         * 2. We reuse the same tag for krealloc'ed objects.
> -         */
> -        if (keep_tag)
> -                return get_tag(object);
> -
>          /*
>           * If the cache neither has a constructor nor has SLAB_TYPESAFE_BY_RCU
>           * set, assign a tag when the object is being allocated (init == false).
> @@ -325,7 +315,7 @@ void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
>          }
>
>          /* Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS */
> -        object = set_tag(object, assign_tag(cache, object, true, false));
> +        object = set_tag(object, assign_tag(cache, object, true));
>
>          return (void *)object;
>  }
> @@ -413,12 +403,46 @@ static void set_alloc_info(struct kmem_cache *cache, void *object,
>          kasan_set_track(&alloc_meta->alloc_track, flags);
>  }
>
> +void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
> +                                        void *object, gfp_t flags)
> +{
> +        u8 tag;
> +        void *tagged_object;
> +
> +        if (gfpflags_allow_blocking(flags))
> +                kasan_quarantine_reduce();
> +
> +        if (unlikely(object == NULL))
> +                return NULL;
> +
> +        if (is_kfence_address(object))
> +                return (void *)object;
> +
> +        /*
> +         * Generate and assign random tag for tag-based modes.
> +         * Tag is ignored in set_tag() for the generic mode.
> +         */
> +        tag = assign_tag(cache, object, false);
> +        tagged_object = set_tag(object, tag);
> +
> +        /*
> +         * Unpoison the whole object.
> +         * For kmalloc() allocations, kasan_kmalloc() will do precise poisoning.
> +         */
> +        kasan_unpoison(tagged_object, cache->object_size);
> +
> +        /* Save alloc info (if possible) for non-kmalloc() allocations. */
> +        if (kasan_stack_collection_enabled())
> +                set_alloc_info(cache, (void *)object, flags, false);
> +
> +        return tagged_object;
> +}
> +
>  static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
> -                                size_t size, gfp_t flags, bool kmalloc)
> +                                size_t size, gfp_t flags)
>  {
>          unsigned long redzone_start;
>          unsigned long redzone_end;
> -        u8 tag;
>
>          if (gfpflags_allow_blocking(flags))
>                  kasan_quarantine_reduce();
> @@ -429,33 +453,41 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
>          if (is_kfence_address(kasan_reset_tag(object)))
>                  return (void *)object;
>
> +        /*
> +         * The object has already been unpoisoned by kasan_slab_alloc() for
> +         * kmalloc() or by ksize() for krealloc().
> +         */
> +
> +        /*
> +         * The redzone has byte-level precision for the generic mode.
> +         * Partially poison the last object granule to cover the unaligned
> +         * part of the redzone.
> +         */
> +        if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> +                kasan_poison_last_granule((void *)object, size);
> +
> +        /* Poison the aligned part of the redzone. */
>          redzone_start = round_up((unsigned long)(object + size),
>                                  KASAN_GRANULE_SIZE);
> -        redzone_end = round_up((unsigned long)object + cache->object_size,
> -                                KASAN_GRANULE_SIZE);
> -        tag = assign_tag(cache, object, false, kmalloc);
> -
> -        /* Tag is ignored in set_tag without CONFIG_KASAN_SW/HW_TAGS */
> -        kasan_unpoison(set_tag(object, tag), size);
> +        redzone_end = (unsigned long)object + cache->object_size;
>          kasan_poison((void *)redzone_start, redzone_end - redzone_start,
>                          KASAN_KMALLOC_REDZONE);
>
> +        /*
> +         * Save alloc info (if possible) for kmalloc() allocations.
> +         * This also rewrites the alloc info when called from kasan_krealloc().
> +         */
>          if (kasan_stack_collection_enabled())
> -                set_alloc_info(cache, (void *)object, flags, kmalloc);
> +                set_alloc_info(cache, (void *)object, flags, true);
>
> -        return set_tag(object, tag);
> -}
> -
> -void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
> -                                        void *object, gfp_t flags)
> -{
> -        return ____kasan_kmalloc(cache, object, cache->object_size, flags, false);
> +        /* Keep the tag that was set by kasan_slab_alloc(). */
> +        return (void *)object;
>  }
>
>  void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object,
>                                          size_t size, gfp_t flags)
>  {
> -        return ____kasan_kmalloc(cache, object, size, flags, true);
> +        return ____kasan_kmalloc(cache, object, size, flags);
>  }
>  EXPORT_SYMBOL(__kasan_kmalloc);
>
> @@ -496,8 +528,7 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
>          if (unlikely(!PageSlab(page)))
>                  return __kasan_kmalloc_large(object, size, flags);
>          else
> -                return ____kasan_kmalloc(page->slab_cache, object, size,
> -                                        flags, true);
> +                return ____kasan_kmalloc(page->slab_cache, object, size, flags);
>  }
>
>  void __kasan_kfree_large(void *ptr, unsigned long ip)
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index dd14e8870023..6a2882997f23 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -358,12 +358,51 @@ static inline bool kasan_byte_accessible(const void *addr)
>
>  #else /* CONFIG_KASAN_HW_TAGS */
>
> -void kasan_poison(const void *address, size_t size, u8 value);
> -void kasan_unpoison(const void *address, size_t size);
> +/**
> + * kasan_poison - mark the memory range as unaccessible
> + * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
> + * @size - range size
> + * @value - value that's written to metadata for the range
> + *
> + * The size gets aligned to KASAN_GRANULE_SIZE before marking the range.
> + */
> +void kasan_poison(const void *addr, size_t size, u8 value);
> +
> +/**
> + * kasan_unpoison - mark the memory range as accessible
> + * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
> + * @size - range size
> + *
> + * For the tag-based modes, the @size gets aligned to KASAN_GRANULE_SIZE before
> + * marking the range.
> + * For the generic mode, the last granule of the memory range gets partially
> + * unpoisoned based on the @size.
> + */
> +void kasan_unpoison(const void *addr, size_t size);
> +
>  bool kasan_byte_accessible(const void *addr);
>
>  #endif /* CONFIG_KASAN_HW_TAGS */
>
> +#ifdef CONFIG_KASAN_GENERIC
> +
> +/**
> + * kasan_poison_last_granule - mark the last granule of the memory range as
> + * unaccessible
> + * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
> + * @size - range size
> + *
> + * This function is only available for the generic mode, as it's the only mode
> + * that has partially poisoned memory granules.
> + */
> +void kasan_poison_last_granule(const void *address, size_t size);
> +
> +#else /* CONFIG_KASAN_GENERIC */
> +
> +static inline void kasan_poison_last_granule(const void *address, size_t size) { }
> +
> +#endif /* CONFIG_KASAN_GENERIC */
> +
>  /*
>   * Exported functions for interfaces called from assembly or from generated
>   * code. Declarations here to avoid warning about missing declarations.
> diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
> index 1372a2fc0ca9..1ed7817e4ee6 100644
> --- a/mm/kasan/shadow.c
> +++ b/mm/kasan/shadow.c
> @@ -69,10 +69,6 @@ void *memcpy(void *dest, const void *src, size_t len)
>          return __memcpy(dest, src, len);
>  }
>
> -/*
> - * Poisons the shadow memory for 'size' bytes starting from 'addr'.
> - * Memory addresses should be aligned to KASAN_GRANULE_SIZE.
> - */
>  void kasan_poison(const void *address, size_t size, u8 value)
>  {
>          void *shadow_start, *shadow_end;
>
> @@ -83,12 +79,12 @@ void kasan_poison(const void *address, size_t size, u8 value)
>           * addresses to this function.
>           */
>          address = kasan_reset_tag(address);
> -        size = round_up(size, KASAN_GRANULE_SIZE);
>
>          /* Skip KFENCE memory if called explicitly outside of sl*b. */
>          if (is_kfence_address(address))
>                  return;
>
> +        size = round_up(size, KASAN_GRANULE_SIZE);
>          shadow_start = kasan_mem_to_shadow(address);
>          shadow_end = kasan_mem_to_shadow(address + size);
>
> @@ -96,6 +92,16 @@ void kasan_poison(const void *address, size_t size, u8 value)
>  }
>  EXPORT_SYMBOL(kasan_poison);
>
> +#ifdef CONFIG_KASAN_GENERIC
> +void kasan_poison_last_granule(const void *address, size_t size)
> +{
> +        if (size & KASAN_GRANULE_MASK) {
> +                u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
> +                *shadow = size & KASAN_GRANULE_MASK;
> +        }
> +}
> +#endif

The function declaration still needs to exist in the dead branch if
!IS_ENABLED(CONFIG_KASAN_GENERIC). It appears that in that case it is
declared (in kasan.h) but not defined. We shouldn't get linker errors,
because the optimizer should remove the dead branch. Nevertheless, is
this construct generally acceptable? (A minimal stand-alone reduction
of the pattern is appended at the end of this mail.)

>  void kasan_unpoison(const void *address, size_t size)
>  {
>          u8 tag = get_tag(address);
> @@ -115,16 +121,12 @@ void kasan_unpoison(const void *address, size_t size)
>          if (is_kfence_address(address))
>                  return;
>
> +        /* Unpoison round_up(size, KASAN_GRANULE_SIZE) bytes. */
>          kasan_poison(address, size, tag);
>
> -        if (size & KASAN_GRANULE_MASK) {
> -                u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
> -
> -                if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))
> -                        *shadow = tag;
> -                else /* CONFIG_KASAN_GENERIC */
> -                        *shadow = size & KASAN_GRANULE_MASK;
> -        }
> +        /* Partially poison the last granule for the generic mode. */
> +        if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> +                kasan_poison_last_granule(address, size);
>  }
>
>  #ifdef CONFIG_MEMORY_HOTPLUG
> diff --git a/mm/slub.c b/mm/slub.c
> index 176b1cb0d006..e564008c2329 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3565,8 +3565,7 @@ static void early_kmem_cache_node_alloc(int node)
>          init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
>          init_tracking(kmem_cache_node, n);
>  #endif
> -        n = kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node),
> -                        GFP_KERNEL);
> +        n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL);
>          page->freelist = get_freepointer(kmem_cache_node, n);
>          page->inuse = 1;
>          page->frozen = 0;
> --
> 2.30.0.365.g02bc693789-goog
>
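To make the concern above concrete, here is a minimal stand-alone
reduction of the IS_ENABLED() pattern in question (a hypothetical
user-space sketch, not the kernel code: IS_ENABLED() is approximated
with a plain macro, and foo() is an invented stand-in for
kasan_poison_last_granule()):

  /* is_enabled_sketch.c - build with: gcc -O2 is_enabled_sketch.c */
  #include <stdio.h>

  #define IS_ENABLED(option) (option)  /* crude stand-in for the kernel macro */
  #define CONFIG_FOO 0                 /* pretend the config option is disabled */

  void foo(void);  /* declared, but deliberately never defined anywhere */

  int main(void)
  {
          if (IS_ENABLED(CONFIG_FOO))
                  foo();  /* dead branch: the compiler drops this call, so
                           * the missing definition is never referenced at
                           * link time */
          printf("linked without a definition of foo()\n");
          return 0;
  }

The upside of this pattern over a plain #ifdef is that the dead branch
is still parsed and type-checked regardless of the config option.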
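One more note on the commit message: since it describes the split
between kasan_poison_last_granule() and kasan_poison(), a small worked
example of the partial-granule shadow encoding may be useful (again a
hypothetical user-space sketch; it assumes the generic mode's 8-byte
granules and only mimics the arithmetic, not the real shadow mapping):

  /* granule_sketch.c - shadow encoding for a partially used last granule */
  #include <stdio.h>

  #define KASAN_GRANULE_SIZE 8UL
  #define KASAN_GRANULE_MASK (KASAN_GRANULE_SIZE - 1)

  int main(void)
  {
          unsigned long size = 20;  /* object size, e.g. kmalloc(20) */

          /* Granules [0,8) and [8,16) are fully accessible: shadow == 0. */
          printf("fully accessible granules: %lu\n", size / KASAN_GRANULE_SIZE);

          /* Granule [16,24) is partial: its shadow byte stores how many
           * leading bytes are accessible, here 20 & 7 == 4; bytes 20..23
           * already belong to the redzone. */
          if (size & KASAN_GRANULE_MASK)
                  printf("last shadow byte: %lu\n", size & KASAN_GRANULE_MASK);
          return 0;
  }

That unaligned tail is exactly why the aligned kasan_poison() in
____kasan_kmalloc() can only start at round_up(object + size,
KASAN_GRANULE_SIZE), with kasan_poison_last_granule() covering the
bytes below that boundary.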