Message-ID: <1571409732.5937.76.camel@lca.pw>
Subject: Re: [PATCH RFC v1 18/26] kmsan: mm: call KMSAN hooks from SLUB code
From: Qian Cai
To: Alexander Potapenko
Cc: Andrew Morton, Vegard Nossum, Dmitry Vyukov, Linux Memory Management List
Date: Fri, 18 Oct 2019 10:42:12 -0400
In-Reply-To:
References: <20191018094304.37056-1-glider@google.com>
 <20191018094304.37056-19-glider@google.com>
 <1571404943.5937.71.camel@lca.pw>
 <1571406117.5937.73.camel@lca.pw>

On Fri, 2019-10-18 at 15:55 +0200, Alexander Potapenko wrote:
> On Fri, Oct 18, 2019 at 3:42 PM Qian Cai wrote:
> >
> > On Fri, 2019-10-18 at 15:33 +0200, Alexander Potapenko wrote:
> > > On Fri, Oct 18, 2019 at 3:22 PM Qian Cai wrote:
> > > >
> > > > On Fri, 2019-10-18 at 11:42 +0200, glider@google.com wrote:
> > > > > In order to report uninitialized memory coming from heap allocations
> > > > > KMSAN has to poison them unless they're created with __GFP_ZERO.
> > > > >
> > > > > It's handy that we need KMSAN hooks in the places where
> > > > > init_on_alloc/init_on_free initialization is performed.
> > > >
> > > > Well, there is SLUB debug, which has red zoning and poisoning checks. What's
> > > > the value of this patch?
> > >
> > > Sorry, are you talking about the whole patchset or just this patch?
> >
> > Just this patch.
> >
> > > Note that SLUB debug is unable to detect uninitialized values with
> > > bit-to-bit precision, nor have I heard of anyone using it for
> > > detecting uses of uninitialized memory in the kernel at all.
> > > The purpose of SLUB debug is to detect corruptions of freed memory.
> >
> > The point is, if developers have SLUB debug enabled, all the free objects will be
> > poisoned, so what's the point of checking for uninitialized memory there?
>
> You are right, SLUB debug has to be handled separately. If I'm
> understanding correctly, right now KMSAN poisons freed memory before
> SLUB debug wipes it, therefore the memory will count as initialized.
> On the other hand, newly allocated memory is still marked as
> uninitialized, so a lot of bugs still remain detectable.

Yes, but newly allocated SLUB objects will be poisoned as well.

> TBH, I haven't tested KMSAN with SLUB debug well enough. Note that
> it's anyway a separate build that requires Clang, so right now it
> doesn't make much sense to combine KMSAN and SLUB debug together.

Can't you just build a debug kernel with SLUB debug enabled but drop this
patch? If there is uninitialized memory here leading to data corruption, SLUB
debug should detect it just as this patch would. If not, we need to understand
why.
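[Editorial illustration of the distinction discussed above, not part of the
thread or the patch: SLUB debug fills objects with a fixed poison pattern and
only verifies that pattern at free/alloc time, while KMSAN tracks whether each
bit was ever written and reports reads of never-written data. The struct and
function names below are made up for the example.]

	#include <linux/slab.h>
	#include <linux/printk.h>

	struct example {
		int initialized;
		int never_written;
	};

	static void uninit_read_demo(void)
	{
		struct example *e = kmalloc(sizeof(*e), GFP_KERNEL);

		if (!e)
			return;
		e->initialized = 1;
		/*
		 * Read of e->never_written, which was never stored to.  SLUB
		 * poisoning only checks its pattern on free/alloc, so this read
		 * goes unnoticed; KMSAN tracks shadow per bit and can report it.
		 */
		if (e->never_written)
			pr_info("read an uninitialized field\n");
		kfree(e);
	}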
> > > > >
> > > > > Signed-off-by: Alexander Potapenko
> > > > > To: Alexander Potapenko
> > > > > Cc: Andrew Morton
> > > > > Cc: Vegard Nossum
> > > > > Cc: Dmitry Vyukov
> > > > > Cc: linux-mm@kvack.org
> > > > > ---
> > > > >
> > > > > Change-Id: I51103b7981d3aabed747d0c85cbdc85568665871
> > > > > ---
> > > > >  mm/slub.c | 37 +++++++++++++++++++++++++++++++------
> > > > >  1 file changed, 31 insertions(+), 6 deletions(-)
> > > > >
> > > > > diff --git a/mm/slub.c b/mm/slub.c
> > > > > index 3d63ae320d31..3d6d4c63446e 100644
> > > > > --- a/mm/slub.c
> > > > > +++ b/mm/slub.c
> > > > > @@ -21,6 +21,8 @@
> > > > >  #include
> > > > >  #include
> > > > >  #include
> > > > > +#include
> > > > > +#include  /* KMSAN_INIT_VALUE */
> > > > >  #include
> > > > >  #include
> > > > >  #include
> > > > > @@ -285,17 +287,27 @@ static void prefetch_freepointer(const struct kmem_cache *s, void *object)
> > > > >  	prefetch(object + s->offset);
> > > > >  }
> > > > >
> > > > > +/*
> > > > > + * When running under KMSAN, get_freepointer_safe() may return an uninitialized
> > > > > + * pointer value in the case the current thread loses the race for the next
> > > > > + * memory chunk in the freelist. In that case this_cpu_cmpxchg_double() in
> > > > > + * slab_alloc_node() will fail, so the uninitialized value won't be used, but
> > > > > + * KMSAN will still check all arguments of cmpxchg because of imperfect
> > > > > + * handling of inline assembly.
> > > > > + * To work around this problem, use KMSAN_INIT_VALUE() to force initialize the
> > > > > + * return value of get_freepointer_safe().
> > > > > + */
> > > > >  static inline void *get_freepointer_safe(struct kmem_cache *s, void *object)
> > > > >  {
> > > > >  	unsigned long freepointer_addr;
> > > > >  	void *p;
> > > > >
> > > > >  	if (!debug_pagealloc_enabled())
> > > > > -		return get_freepointer(s, object);
> > > > > +		return KMSAN_INIT_VALUE(get_freepointer(s, object));
> > > > >
> > > > >  	freepointer_addr = (unsigned long)object + s->offset;
> > > > >  	probe_kernel_read(&p, (void **)freepointer_addr, sizeof(p));
> > > > > -	return freelist_ptr(s, p, freepointer_addr);
> > > > > +	return KMSAN_INIT_VALUE(freelist_ptr(s, p, freepointer_addr));
> > > > >  }
> > > > >
> > > > >  static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
> > > > > @@ -1390,6 +1402,7 @@ static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
> > > > >  	ptr = kasan_kmalloc_large(ptr, size, flags);
> > > > >  	/* As ptr might get tagged, call kmemleak hook after KASAN. */
> > > > >  	kmemleak_alloc(ptr, size, 1, flags);
> > > > > +	kmsan_kmalloc_large(ptr, size, flags);
> > > > >  	return ptr;
> > > > >  }
> > > > >
> > > > > @@ -1397,6 +1410,7 @@ static __always_inline void kfree_hook(void *x)
> > > > >  {
> > > > >  	kmemleak_free(x);
> > > > >  	kasan_kfree_large(x, _RET_IP_);
> > > > > +	kmsan_kfree_large(x);
> > > > >  }
> > > > >
> > > > >  static __always_inline bool slab_free_hook(struct kmem_cache *s, void *x)
> > > > > @@ -1453,6 +1467,12 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
> > > > >  		} while (object != old_tail);
> > > > >  	}
> > > > >
> > > > > +	do {
> > > > > +		object = next;
> > > > > +		next = get_freepointer(s, object);
> > > > > +		kmsan_slab_free(s, object);
> > > > > +	} while (object != old_tail);
> > > > > +
> > > > >  	/*
> > > > >  	 * Compiler cannot detect this function can be removed if slab_free_hook()
> > > > >  	 * evaluates to nothing. Thus, catch all relevant config debug options here.
> > > > > @@ -2769,6 +2789,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
> > > > >  	if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object)
> > > > >  		memset(object, 0, s->object_size);
> > > > >
> > > > > +	kmsan_slab_alloc(s, object, gfpflags);
> > > > >  	slab_post_alloc_hook(s, gfpflags, 1, &object);
> > > > >
> > > > >  	return object;
> > > > > @@ -2797,6 +2818,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
> > > > >  	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
> > > > >  	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
> > > > >  	ret = kasan_kmalloc(s, ret, size, gfpflags);
> > > > > +
> > > > >  	return ret;
> > > > >  }
> > > > >  EXPORT_SYMBOL(kmem_cache_alloc_trace);
> > > > > @@ -2809,7 +2831,6 @@ void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
> > > > >
> > > > >  	trace_kmem_cache_alloc_node(_RET_IP_, ret,
> > > > >  				    s->object_size, s->size, gfpflags, node);
> > > > > -
> > > > >  	return ret;
> > > > >  }
> > > > >  EXPORT_SYMBOL(kmem_cache_alloc_node);
> > > > > @@ -2825,6 +2846,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
> > > > >  				 size, s->size, gfpflags, node);
> > > > >
> > > > >  	ret = kasan_kmalloc(s, ret, size, gfpflags);
> > > > > +
> > > > >  	return ret;
> > > > >  }
> > > > >  EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
> > > > > @@ -3150,7 +3172,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
> > > > >  			  void **p)
> > > > >  {
> > > > >  	struct kmem_cache_cpu *c;
> > > > > -	int i;
> > > > > +	int i, j;
> > > > >
> > > > >  	/* memcg and kmem_cache debug support */
> > > > >  	s = slab_pre_alloc_hook(s, flags);
> > > > > @@ -3188,11 +3210,11 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
> > > > >
> > > > >  	/* Clear memory outside IRQ disabled fastpath loop */
> > > > >  	if (unlikely(slab_want_init_on_alloc(flags, s))) {
> > > > > -		int j;
> > > > > -
> > > > >  		for (j = 0; j < i; j++)
> > > > >  			memset(p[j], 0, s->object_size);
> > > > >  	}
> > > > > +	for (j = 0; j < i; j++)
> > > > > +		kmsan_slab_alloc(s, p[j], flags);
> > > > >
> > > > >  	/* memcg and kmem_cache debug support */
> > > > >  	slab_post_alloc_hook(s, flags, size, p);
> > > > > @@ -3793,6 +3815,7 @@ static int __init setup_slub_min_objects(char *str)
> > > > >
> > > > >  __setup("slub_min_objects=", setup_slub_min_objects);
> > > > >
> > > > > +__no_sanitize_memory
> > > > >  void *__kmalloc(size_t size, gfp_t flags)
> > > > >  {
> > > > >  	struct kmem_cache *s;
> > > > > @@ -5698,6 +5721,7 @@ static char *create_unique_id(struct kmem_cache *s)
> > > > >  	p += sprintf(p, "%07u", s->size);
> > > > >
> > > > >  	BUG_ON(p > name + ID_STR_LENGTH - 1);
> > > > > +	kmsan_unpoison_shadow(name, p - name);
> > > > >  	return name;
> > > > >  }
> > > > >
> > > > > @@ -5847,6 +5871,7 @@ static int sysfs_slab_alias(struct kmem_cache *s, const char *name)
> > > > >  	al->name = name;
> > > > >  	al->next = alias_list;
> > > > >  	alias_list = al;
> > > > > +	kmsan_unpoison_shadow(al, sizeof(struct saved_alias));
> > > > >  	return 0;
> > > > >  }
> > > > >
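[Editorial note appended to the quoted patch: the commit message says KMSAN
poisons heap allocations unless they're created with __GFP_ZERO. The sketch
below is a rough, conceptual reading of what a hook like kmsan_slab_alloc() is
therefore expected to do; it is not the actual KMSAN implementation, and
kmsan_poison_shadow() is assumed here as the counterpart of the
kmsan_unpoison_shadow() helper visible in the patch.]

	/*
	 * Conceptual sketch only, based on the commit message above; not the
	 * actual KMSAN code.
	 */
	static inline void kmsan_slab_alloc_sketch(struct kmem_cache *s,
						   void *object, gfp_t flags)
	{
		if (!object)
			return;
		if (flags & __GFP_ZERO)
			/* The object was zeroed, so every bit is initialized. */
			kmsan_unpoison_shadow(object, s->object_size);
		else
			/* Mark the object uninitialized so later reads are reported. */
			kmsan_poison_shadow(object, s->object_size);
	}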