From: Alexander Popov
To: Kees Cook, Jann Horn, Will Deacon, Andrey Ryabinin,
	Alexander Potapenko, Dmitry Vyukov, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Masahiro Yamada, Masami Hiramatsu, Steven Rostedt,
	Peter Zijlstra, Krzysztof Kozlowski, Patrick Bellasi,
	David Howells, Eric Biederman, Johannes Weiner, Laura Abbott,
	Arnd Bergmann, Greg Kroah-Hartman, kasan-dev@googlegroups.com,
	linux-mm@kvack.org, kernel-hardening@lists.openwall.com,
	linux-kernel@vger.kernel.org, Alexander Popov
Cc: notify@kernel.org
Subject: [PATCH RFC 1/2] mm: Extract SLAB_QUARANTINE from KASAN
Date: Thu, 13 Aug 2020 18:19:21 +0300
Message-Id: <20200813151922.1093791-2-alex.popov@linux.com>
In-Reply-To: <20200813151922.1093791-1-alex.popov@linux.com>
References: <20200813151922.1093791-1-alex.popov@linux.com>
MIME-Version: 1.0

Heap spraying is an exploitation technique that aims to put controlled
bytes at a predetermined memory location on the heap. Heap spraying for
exploiting use-after-free in the Linux kernel relies on the fact that on
kmalloc(), the slab allocator returns the address of the memory that was
recently freed. Allocating a kernel object with the same size and
controlled contents therefore allows overwriting the vulnerable freed
object.

Let's extract the slab freelist quarantine from the KASAN functionality
and call it CONFIG_SLAB_QUARANTINE. This feature breaks the widespread
heap spraying technique used for exploiting use-after-free
vulnerabilities in kernel code.

If this feature is enabled, freed allocations are stored in the
quarantine and can't be instantly reallocated and overwritten by an
exploit performing heap spraying.
Signed-off-by: Alexander Popov
---
 include/linux/kasan.h      | 107 ++++++++++++++++++++-----------
 include/linux/slab_def.h   |   2 +-
 include/linux/slub_def.h   |   2 +-
 init/Kconfig               |  11 ++++
 mm/Makefile                |   3 +-
 mm/kasan/Makefile          |   2 +
 mm/kasan/kasan.h           |  75 +++++++++++++-------------
 mm/kasan/quarantine.c      |   2 +
 mm/kasan/slab_quarantine.c |  99 ++++++++++++++++++++++++++++++++++
 mm/slub.c                  |   2 +-
 10 files changed, 216 insertions(+), 89 deletions(-)
 create mode 100644 mm/kasan/slab_quarantine.c

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 087fba34b209..b837216f760c 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -42,32 +42,14 @@ void kasan_unpoison_task_stack(struct task_struct *task);
 void kasan_alloc_pages(struct page *page, unsigned int order);
 void kasan_free_pages(struct page *page, unsigned int order);
 
-void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
-			slab_flags_t *flags);
-
 void kasan_poison_slab(struct page *page);
 void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
 void kasan_poison_object_data(struct kmem_cache *cache, void *object);
 void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
					const void *object);
 
-void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
-					gfp_t flags);
 void kasan_kfree_large(void *ptr, unsigned long ip);
 void kasan_poison_kfree(void *ptr, unsigned long ip);
-void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
-					size_t size, gfp_t flags);
-void * __must_check kasan_krealloc(const void *object, size_t new_size,
-					gfp_t flags);
-
-void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object,
-					gfp_t flags);
-bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
-
-struct kasan_cache {
-	int alloc_meta_offset;
-	int free_meta_offset;
-};
 
 /*
  * These functions provide a special case to support backing module
@@ -107,10 +89,6 @@ static inline void kasan_disable_current(void) {}
 static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
 
-static inline void kasan_cache_create(struct kmem_cache *cache,
-				      unsigned int *size,
-				      slab_flags_t *flags) {}
-
 static inline void kasan_poison_slab(struct page *page) {}
 static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
				void *object) {}
@@ -122,17 +100,65 @@ static inline void *kasan_init_slab_obj(struct kmem_cache *cache,
 	return (void *)object;
 }
 
+static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
+static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
+static inline void kasan_free_shadow(const struct vm_struct *vm) {}
+static inline void kasan_remove_zero_shadow(void *start, unsigned long size) {}
+static inline void kasan_unpoison_slab(const void *ptr) {}
+
+static inline int kasan_module_alloc(void *addr, size_t size)
+{
+	return 0;
+}
+
+static inline int kasan_add_zero_shadow(void *start, unsigned long size)
+{
+	return 0;
+}
+
+static inline size_t kasan_metadata_size(struct kmem_cache *cache)
+{
+	return 0;
+}
+
+#endif /* CONFIG_KASAN */
+
+struct kasan_cache {
+	int alloc_meta_offset;
+	int free_meta_offset;
+};
+
+#if defined(CONFIG_KASAN) || defined(CONFIG_SLAB_QUARANTINE)
+
+void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
+			slab_flags_t *flags);
+void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
+					gfp_t flags);
+void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
+					size_t size, gfp_t flags);
+void * __must_check kasan_krealloc(const void *object, size_t new_size,
+					gfp_t flags);
+void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object,
+					gfp_t flags);
+bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
+
+#else /* CONFIG_KASAN || CONFIG_SLAB_QUARANTINE */
+
+static inline void kasan_cache_create(struct kmem_cache *cache,
+				      unsigned int *size,
+				      slab_flags_t *flags) {}
+
 static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags)
 {
 	return ptr;
 }
-static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
-static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
+
 static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
				size_t size, gfp_t flags)
 {
 	return (void *)object;
 }
+
 static inline void *kasan_krealloc(const void *object, size_t new_size,
				gfp_t flags)
 {
@@ -144,43 +170,28 @@ static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
 {
 	return object;
 }
+
 static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
				unsigned long ip)
 {
 	return false;
 }
-
-static inline int kasan_module_alloc(void *addr, size_t size) { return 0; }
-static inline void kasan_free_shadow(const struct vm_struct *vm) {}
-
-static inline int kasan_add_zero_shadow(void *start, unsigned long size)
-{
-	return 0;
-}
-static inline void kasan_remove_zero_shadow(void *start,
-					unsigned long size)
-{}
-
-static inline void kasan_unpoison_slab(const void *ptr) { }
-static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
-
-#endif /* CONFIG_KASAN */
+#endif /* CONFIG_KASAN || CONFIG_SLAB_QUARANTINE */
 
 #ifdef CONFIG_KASAN_GENERIC
-
 #define KASAN_SHADOW_INIT 0
-
-void kasan_cache_shrink(struct kmem_cache *cache);
-void kasan_cache_shutdown(struct kmem_cache *cache);
 void kasan_record_aux_stack(void *ptr);
-
 #else /* CONFIG_KASAN_GENERIC */
+static inline void kasan_record_aux_stack(void *ptr) {}
+#endif /* CONFIG_KASAN_GENERIC */
 
+#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_SLAB_QUARANTINE)
+void kasan_cache_shrink(struct kmem_cache *cache);
+void kasan_cache_shutdown(struct kmem_cache *cache);
+#else /* CONFIG_KASAN_GENERIC || CONFIG_SLAB_QUARANTINE */
 static inline void kasan_cache_shrink(struct kmem_cache *cache) {}
 static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
-static inline void kasan_record_aux_stack(void *ptr) {}
-
-#endif /* CONFIG_KASAN_GENERIC */
+#endif /* CONFIG_KASAN_GENERIC || CONFIG_SLAB_QUARANTINE */
 
 #ifdef CONFIG_KASAN_SW_TAGS
 
diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 9eb430c163c2..fc7548f27512 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -72,7 +72,7 @@ struct kmem_cache {
 	int obj_offset;
 #endif /* CONFIG_DEBUG_SLAB */
 
-#ifdef CONFIG_KASAN
+#if defined(CONFIG_KASAN) || defined(CONFIG_SLAB_QUARANTINE)
 	struct kasan_cache kasan_info;
 #endif
 
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 1be0ed5befa1..71020cee9fd2 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -124,7 +124,7 @@ struct kmem_cache {
 	unsigned int *random_seq;
 #endif
 
-#ifdef CONFIG_KASAN
+#if defined(CONFIG_KASAN) || defined(CONFIG_SLAB_QUARANTINE)
 	struct kasan_cache kasan_info;
 #endif
 
diff --git a/init/Kconfig b/init/Kconfig
index d6a0b31b13dc..de5aa061762f 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1931,6 +1931,17 @@ config SLAB_FREELIST_HARDENED
	  sanity-checking than others. This option is most effective
	  with CONFIG_SLUB.
 
+config SLAB_QUARANTINE
+	bool "Enable slab freelist quarantine"
+	depends on !KASAN && (SLAB || SLUB)
+	help
+	  Enable slab freelist quarantine to break heap spraying technique
+	  used for exploiting use-after-free vulnerabilities in the kernel
+	  code. If this feature is enabled, freed allocations are stored
+	  in the quarantine and can't be instantly reallocated and
+	  overwritten by the exploit performing heap spraying.
+	  This feature is a part of KASAN functionality.
+
 config SHUFFLE_PAGE_ALLOCATOR
	bool "Page allocator randomization"
	default SLAB_FREELIST_RANDOM && ACPI_NUMA
diff --git a/mm/Makefile b/mm/Makefile
index d5649f1c12c0..c052bc616a88 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -52,7 +52,7 @@ obj-y := filemap.o mempool.o oom_kill.o fadvise.o \
			   mm_init.o percpu.o slab_common.o \
			   compaction.o vmacache.o \
			   interval_tree.o list_lru.o workingset.o \
-			   debug.o gup.o $(mmu-y)
+			   debug.o gup.o kasan/ $(mmu-y)
 
 # Give 'page_alloc' its own module-parameter namespace
 page-alloc-y := page_alloc.o
@@ -80,7 +80,6 @@ obj-$(CONFIG_KSM) += ksm.o
 obj-$(CONFIG_PAGE_POISONING) += page_poison.o
 obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
-obj-$(CONFIG_KASAN) += kasan/
 obj-$(CONFIG_FAILSLAB) += failslab.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MEMTEST) += memtest.o
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
index 370d970e5ab5..f6367d56a4d0 100644
--- a/mm/kasan/Makefile
+++ b/mm/kasan/Makefile
@@ -32,3 +32,5 @@ CFLAGS_tags_report.o := $(CC_FLAGS_KASAN_RUNTIME)
 obj-$(CONFIG_KASAN) := common.o init.o report.o
 obj-$(CONFIG_KASAN_GENERIC) += generic.o generic_report.o quarantine.o
 obj-$(CONFIG_KASAN_SW_TAGS) += tags.o tags_report.o
+
+obj-$(CONFIG_SLAB_QUARANTINE) += slab_quarantine.o quarantine.o
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index ac499456740f..979c5600db8c 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -5,6 +5,43 @@
 #include
 #include
 
+struct qlist_node {
+	struct qlist_node *next;
+};
+
+struct kasan_track {
+	u32 pid;
+	depot_stack_handle_t stack;
+};
+
+struct kasan_free_meta {
+	/* This field is used while the object is in the quarantine.
+	 * Otherwise it might be used for the allocator freelist.
+	 */
+	struct qlist_node quarantine_link;
+#ifdef CONFIG_KASAN_GENERIC
+	struct kasan_track free_track;
+#endif
+};
+
+struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
+					const void *object);
+
+#if defined(CONFIG_KASAN_GENERIC) && \
+	(defined(CONFIG_SLAB) || defined(CONFIG_SLUB)) || \
+	defined(CONFIG_SLAB_QUARANTINE)
+void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache);
+void quarantine_reduce(void);
+void quarantine_remove_cache(struct kmem_cache *cache);
+#else
+static inline void quarantine_put(struct kasan_free_meta *info,
+					struct kmem_cache *cache) { }
+static inline void quarantine_reduce(void) { }
+static inline void quarantine_remove_cache(struct kmem_cache *cache) { }
+#endif
+
+#ifdef CONFIG_KASAN
+
 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
@@ -87,17 +124,8 @@ struct kasan_global {
 #endif
 };
 
-/**
- * Structures to keep alloc and free tracks *
- */
-
 #define KASAN_STACK_DEPTH 64
 
-struct kasan_track {
-	u32 pid;
-	depot_stack_handle_t stack;
-};
-
 #ifdef CONFIG_KASAN_SW_TAGS_IDENTIFY
 #define KASAN_NR_FREE_STACKS 5
 #else
@@ -121,23 +149,8 @@ struct kasan_alloc_meta {
 #endif
 };
 
-struct qlist_node {
-	struct qlist_node *next;
-};
-struct kasan_free_meta {
-	/* This field is used while the object is in the quarantine.
-	 * Otherwise it might be used for the allocator freelist.
-	 */
-	struct qlist_node quarantine_link;
-#ifdef CONFIG_KASAN_GENERIC
-	struct kasan_track free_track;
-#endif
-};
-
 struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
					const void *object);
-struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
-					const void *object);
 
 static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
 {
@@ -178,18 +191,6 @@ void kasan_set_free_info(struct kmem_cache *cache, void *object, u8 tag);
 struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
					void *object, u8 tag);
 
-#if defined(CONFIG_KASAN_GENERIC) && \
-	(defined(CONFIG_SLAB) || defined(CONFIG_SLUB))
-void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache);
-void quarantine_reduce(void);
-void quarantine_remove_cache(struct kmem_cache *cache);
-#else
-static inline void quarantine_put(struct kasan_free_meta *info,
-					struct kmem_cache *cache) { }
-static inline void quarantine_reduce(void) { }
-static inline void quarantine_remove_cache(struct kmem_cache *cache) { }
-#endif
-
 #ifdef CONFIG_KASAN_SW_TAGS
 
 void print_tags(u8 addr_tag, const void *addr);
@@ -296,4 +297,6 @@ void __hwasan_storeN_noabort(unsigned long addr, size_t size);
 
 void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size);
 
+#endif /* CONFIG_KASAN */
+
 #endif
diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
index 4c5375810449..61666263c53e 100644
--- a/mm/kasan/quarantine.c
+++ b/mm/kasan/quarantine.c
@@ -145,7 +145,9 @@ static void qlink_free(struct qlist_node *qlink, struct kmem_cache *cache)
	if (IS_ENABLED(CONFIG_SLAB))
		local_irq_save(flags);
 
+#ifdef CONFIG_KASAN
	*(u8 *)kasan_mem_to_shadow(object) = KASAN_KMALLOC_FREE;
+#endif
	___cache_free(cache, object, _THIS_IP_);
 
	if (IS_ENABLED(CONFIG_SLAB))
diff --git a/mm/kasan/slab_quarantine.c b/mm/kasan/slab_quarantine.c
new file mode 100644
index 000000000000..5764aa7ad253
--- /dev/null
+++ b/mm/kasan/slab_quarantine.c
@@ -0,0 +1,99 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * The layer providing KASAN slab quarantine separately without the
+ * main KASAN functionality.
+ *
+ * Author: Alexander Popov
+ *
+ * This feature breaks widespread heap spraying technique used for
+ * exploiting use-after-free vulnerabilities in the kernel code.
+ *
+ * Heap spraying is an exploitation technique that aims to put controlled
+ * bytes at a predetermined memory location on the heap. Heap spraying for
+ * exploiting use-after-free in the Linux kernel relies on the fact that on
+ * kmalloc(), the slab allocator returns the address of the memory that was
+ * recently freed. Allocating a kernel object with the same size and
+ * controlled contents allows overwriting the vulnerable freed object.
+ *
+ * If freed allocations are stored in the quarantine, they can't be
+ * instantly reallocated and overwritten by the exploit performing
+ * heap spraying.
+ */
+
+#include
+#include
+#include
+#include
+#include "../slab.h"
+#include "kasan.h"
+
+void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
+			slab_flags_t *flags)
+{
+	cache->kasan_info.alloc_meta_offset = 0;
+
+	if (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
+	     cache->object_size < sizeof(struct kasan_free_meta)) {
+		cache->kasan_info.free_meta_offset = *size;
+		*size += sizeof(struct kasan_free_meta);
+		BUG_ON(*size > KMALLOC_MAX_SIZE);
+	}
+
+	*flags |= SLAB_KASAN;
+}
+
+struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
+					const void *object)
+{
+	BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
+	return (void *)object + cache->kasan_info.free_meta_offset;
+}
+
+bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
+{
+	quarantine_put(get_free_info(cache, object), cache);
+	return true;
+}
+
+static void *reduce_helper(const void *ptr, gfp_t flags)
+{
+	if (gfpflags_allow_blocking(flags))
+		quarantine_reduce();
+
+	return (void *)ptr;
+}
+
+void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
+					gfp_t flags)
+{
+	return reduce_helper(ptr, flags);
+}
+
+void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
+{
+	return reduce_helper(object, flags);
+}
+
+void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
+					gfp_t flags)
+{
+	return reduce_helper(object, flags);
+}
+
+void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
+					size_t size, gfp_t flags)
+{
+	return reduce_helper(object, flags);
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_cache_shrink(struct kmem_cache *cache)
+{
+	quarantine_remove_cache(cache);
+}
+
+void kasan_cache_shutdown(struct kmem_cache *cache)
+{
+	if (!__kmem_cache_empty(cache))
+		quarantine_remove_cache(cache);
+}
diff --git a/mm/slub.c b/mm/slub.c
index 68c02b2eecd9..8d6620effa3c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3143,7 +3143,7 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
	do_slab_free(s, page, head, tail, cnt, addr);
 }
 
-#ifdef CONFIG_KASAN_GENERIC
+#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_SLAB_QUARANTINE)
 void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
 {
	do_slab_free(cache, virt_to_head_page(x), x, NULL, 1, addr);
-- 
2.26.2