From: Dmitry Vyukov <dvyukov@google.com>
To: Andrey Konovalov <andreyknvl@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>,
	Alexander Potapenko <glider@google.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will.deacon@arm.com>,
	Christoph Lameter <cl@linux.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Nick Desaulniers <ndesaulniers@google.com>,
	Marc Zyngier <marc.zyngier@arm.com>,
	Dave Martin <dave.martin@arm.com>,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>,
	"Eric W . Biederman" <ebiederm@xmission.com>,
	Ingo Molnar <mingo@kernel.org>,
	Paul Lawrence <paullawrence@google.com>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Arnd Bergmann <arnd@arndb.de>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Kate Stewart <kstewart@linuxfoundation.org>,
	Mike Rapoport <rppt@linux.vnet.ibm.com>,
	kasan-dev <kasan-dev@googlegroups.com>,
	linux-doc@vger.kernel.org, LKML <linux-kernel@vger.kernel.org>,
	Linux ARM <linux-arm-kernel@lists.infradead.org>,
	linux-sparse@vger.kernel.org, Linux-MM <linux-mm@kvack.org>,
	"open list:KERNEL BUILD + fi..." <linux-kbuild@vger.kernel.org>,
	Kostya Serebryany <kcc@google.com>,
	Evgeniy Stepanov <eugenis@google.com>,
	Lee Smith <Lee.Smith@arm.com>,
	Ramana Radhakrishnan <Ramana.Radhakrishnan@arm.com>,
	Jacob Bramley <Jacob.Bramley@arm.com>,
	Ruben Ayrapetyan <Ruben.Ayrapetyan@arm.com>,
	Jann Horn <jannh@google.com>, Mark Brand <markbrand@google.com>,
	Chintan Pandya <cpandya@codeaurora.org>,
	Vishwath Mohan <vishwath@google.com>
Subject: Re: [PATCH v6 14/18] khwasan: add hooks implementation
Date: Wed, 12 Sep 2018 20:30:32 +0200
Message-ID: <CACT4Y+YicYhmzrKf84=oJJErdFKSNM70cmoN3m_zzERcUQ_-Fg@mail.gmail.com>
In-Reply-To: <4267d0903e0fdf9c261b91cf8a2bf0f71047a43c.1535462971.git.andreyknvl@google.com>

On Wed, Aug 29, 2018 at 1:35 PM, Andrey Konovalov <andreyknvl@google.com> wrote:
> This commit adds the KHWASAN-specific hooks implementation and adjusts
> the common KASAN and KHWASAN ones.
>
> 1. When a new slab cache is created, KHWASAN rounds up the size of the
>    objects in this cache to KASAN_SHADOW_SCALE_SIZE (== 16).
>
> 2. On each kmalloc KHWASAN generates a random tag, sets the shadow memory
>    that corresponds to this object to this tag, and embeds this tag value
>    into the top byte of the returned pointer.
>
> 3. On each kfree KHWASAN poisons the shadow memory with a random tag to
>    allow detection of use-after-free bugs.
>
> The rest of the logic of the hook implementation is very similar to the
> one provided by KASAN. KHWASAN saves allocation and free stack metadata
> to the slab object the same way KASAN does.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
>  mm/kasan/common.c  | 82 +++++++++++++++++++++++++++++++++++-----------
>  mm/kasan/kasan.h   |  8 +++++
>  mm/kasan/khwasan.c | 40 ++++++++++++++++++++++
>  3 files changed, 111 insertions(+), 19 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index bed8e13c6e1d..938229b26f3a 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -140,6 +140,9 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value)
>  {
>         void *shadow_start, *shadow_end;
>
> +       /* Perform shadow offset calculation based on untagged address */
> +       address = reset_tag(address);
> +
>         shadow_start = kasan_mem_to_shadow(address);
>         shadow_end = kasan_mem_to_shadow(address + size);
>
> @@ -148,11 +151,20 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value)
>
>  void kasan_unpoison_shadow(const void *address, size_t size)
>  {
> -       kasan_poison_shadow(address, size, 0);
> +       u8 tag = get_tag(address);
> +
> +       /* Perform shadow offset calculation based on untagged address */

The comment is not super-useful. It would be more useful to say why we
need to do this.
Most callers explicitly untag the pointer passed to this function; for
some it's unclear whether the pointer contains a tag or not.
For example, __hwasan_tag_memory -- does it accept tagged or untagged
pointers?
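Something along these lines would say more (just a sketch; the exact
contract is precisely what needs documenting):

	/*
	 * The pointer may still carry a tag at this point (e.g. when
	 * called from __hwasan_tag_memory), so strip it before
	 * computing the shadow offset: kasan_mem_to_shadow() expects
	 * an untagged address.
	 */
	address = reset_tag(address);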


> +       address = reset_tag(address);
> +
> +       kasan_poison_shadow(address, size, tag);
>
>         if (size & KASAN_SHADOW_MASK) {
>                 u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
> -               *shadow = size & KASAN_SHADOW_MASK;
> +
> +               if (IS_ENABLED(CONFIG_KASAN_HW))
> +                       *shadow = tag;
> +               else
> +                       *shadow = size & KASAN_SHADOW_MASK;
>         }
>  }


It seems that this function is just different for kasan and khwasan.
Currently for kasan we have:

kasan_poison_shadow(address, size, tag);
if (size & KASAN_SHADOW_MASK) {
        u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
        *shadow = size & KASAN_SHADOW_MASK;
}

But what we want to say for khwasan is:

kasan_poison_shadow(address, round_up(size, KASAN_SHADOW_SCALE_SIZE),
		    get_tag(address));

Not sure if we want to keep a common implementation or just have
separate implementations...
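
With separate implementations it could look like this (a sketch based
on the snippets above, assuming the tag should always come from the
address itself):

#ifdef CONFIG_KASAN_GENERIC
void kasan_unpoison_shadow(const void *address, size_t size)
{
	kasan_poison_shadow(address, size, 0);

	if (size & KASAN_SHADOW_MASK) {
		u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
		*shadow = size & KASAN_SHADOW_MASK;
	}
}
#else
void kasan_unpoison_shadow(const void *address, size_t size)
{
	/* Rounding up tags the whole last granule with the same tag. */
	kasan_poison_shadow(address, round_up(size, KASAN_SHADOW_SCALE_SIZE),
			    get_tag(address));
}
#endif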


>
> @@ -200,8 +212,9 @@ void kasan_unpoison_stack_above_sp_to(const void *watermark)
>
>  void kasan_alloc_pages(struct page *page, unsigned int order)
>  {
> -       if (likely(!PageHighMem(page)))
> -               kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
> +       if (unlikely(PageHighMem(page)))
> +               return;
> +       kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
>  }
>
>  void kasan_free_pages(struct page *page, unsigned int order)
> @@ -235,6 +248,7 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
>                         slab_flags_t *flags)
>  {
>         unsigned int orig_size = *size;
> +       unsigned int redzone_size = 0;

This variable is always initialized below. We don't generally
initialize local variables in this case.
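i.e. just:

	unsigned int redzone_size;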

>         int redzone_adjust;
>
>         /* Add alloc meta. */
> @@ -242,20 +256,20 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
>         *size += sizeof(struct kasan_alloc_meta);
>
>         /* Add free meta. */
> -       if (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
> -           cache->object_size < sizeof(struct kasan_free_meta)) {
> +       if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
> +           (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
> +            cache->object_size < sizeof(struct kasan_free_meta))) {
>                 cache->kasan_info.free_meta_offset = *size;
>                 *size += sizeof(struct kasan_free_meta);
>         }
> -       redzone_adjust = optimal_redzone(cache->object_size) -
> -               (*size - cache->object_size);
>
> +       redzone_size = optimal_redzone(cache->object_size);
> +       redzone_adjust = redzone_size - (*size - cache->object_size);
>         if (redzone_adjust > 0)
>                 *size += redzone_adjust;
>
>         *size = min_t(unsigned int, KMALLOC_MAX_SIZE,
> -                       max(*size, cache->object_size +
> -                                       optimal_redzone(cache->object_size)));
> +                       max(*size, cache->object_size + redzone_size));
>
>         /*
>          * If the metadata doesn't fit, don't enable KASAN at all.
> @@ -268,6 +282,8 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
>                 return;
>         }
>
> +       cache->align = round_up(cache->align, KASAN_SHADOW_SCALE_SIZE);
> +
>         *flags |= SLAB_KASAN;
>  }
>
> @@ -328,15 +344,30 @@ void *kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
>         return kasan_kmalloc(cache, object, cache->object_size, flags);
>  }
>
> +static inline bool shadow_invalid(u8 tag, s8 shadow_byte)
> +{
> +       if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> +               return shadow_byte < 0 ||
> +                       shadow_byte >= KASAN_SHADOW_SCALE_SIZE;
> +       else
> +               return tag != (u8)shadow_byte;
> +}
> +
>  static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
>                               unsigned long ip, bool quarantine)
>  {
>         s8 shadow_byte;
> +       u8 tag;
> +       void *tagged_object;
>         unsigned long rounded_up_size;
>
> +       tag = get_tag(object);
> +       tagged_object = object;
> +       object = reset_tag(object);
> +
>         if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
>             object)) {
> -               kasan_report_invalid_free(object, ip);
> +               kasan_report_invalid_free(tagged_object, ip);
>                 return true;
>         }
>
> @@ -345,20 +376,22 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
>                 return false;
>
>         shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(object));
> -       if (shadow_byte < 0 || shadow_byte >= KASAN_SHADOW_SCALE_SIZE) {
> -               kasan_report_invalid_free(object, ip);
> +       if (shadow_invalid(tag, shadow_byte)) {
> +               kasan_report_invalid_free(tagged_object, ip);
>                 return true;
>         }
>
>         rounded_up_size = round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE);
>         kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
>
> -       if (!quarantine || unlikely(!(cache->flags & SLAB_KASAN)))
> +       if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
> +                       unlikely(!(cache->flags & SLAB_KASAN)))
>                 return false;
>
>         set_track(&get_alloc_info(cache, object)->free_track, GFP_NOWAIT);
>         quarantine_put(get_free_info(cache, object), cache);
> -       return true;
> +
> +       return IS_ENABLED(CONFIG_KASAN_GENERIC);
>  }
>
>  bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> @@ -371,6 +404,7 @@ void *kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
>  {
>         unsigned long redzone_start;
>         unsigned long redzone_end;
> +       u8 tag;
>
>         if (gfpflags_allow_blocking(flags))
>                 quarantine_reduce();
> @@ -383,14 +417,24 @@ void *kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
>         redzone_end = round_up((unsigned long)object + cache->object_size,
>                                 KASAN_SHADOW_SCALE_SIZE);
>
> -       kasan_unpoison_shadow(object, size);
> +       /*
> +        * Objects with constructors and objects from SLAB_TYPESAFE_BY_RCU slabs
> +        * have tags preassigned and are already tagged.
> +        */
> +       if (IS_ENABLED(CONFIG_KASAN_HW) &&
> +                       (cache->ctor || cache->flags & SLAB_TYPESAFE_BY_RCU))
> +               tag = get_tag(object);
> +       else
> +               tag = random_tag();
> +
> +       kasan_unpoison_shadow(set_tag(object, tag), size);
>         kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
>                 KASAN_KMALLOC_REDZONE);
>
>         if (cache->flags & SLAB_KASAN)
>                 set_track(&get_alloc_info(cache, object)->alloc_track, flags);
>
> -       return (void *)object;
> +       return set_tag(object, tag);
>  }
>  EXPORT_SYMBOL(kasan_kmalloc);
>
> @@ -440,7 +484,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
>         page = virt_to_head_page(ptr);
>
>         if (unlikely(!PageSlab(page))) {
> -               if (ptr != page_address(page)) {
> +               if (reset_tag(ptr) != page_address(page)) {
>                         kasan_report_invalid_free(ptr, ip);
>                         return;
>                 }
> @@ -453,7 +497,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
>
>  void kasan_kfree_large(void *ptr, unsigned long ip)
>  {
> -       if (ptr != page_address(virt_to_head_page(ptr)))
> +       if (reset_tag(ptr) != page_address(virt_to_head_page(ptr)))
>                 kasan_report_invalid_free(ptr, ip);
>         /* The object will be poisoned by page_alloc. */
>  }
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index d60859d26be7..6f4f2ebf5f57 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -12,10 +12,18 @@
>  #define KHWASAN_TAG_INVALID    0xFE /* inaccessible memory tag */
>  #define KHWASAN_TAG_MAX                0xFD /* maximum value for random tags */
>
> +#ifdef CONFIG_KASAN_GENERIC
>  #define KASAN_FREE_PAGE         0xFF  /* page was freed */
>  #define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
>  #define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
>  #define KASAN_KMALLOC_FREE      0xFB  /* object was freed (kmem_cache_free/kfree) */
> +#else
> +#define KASAN_FREE_PAGE         KHWASAN_TAG_INVALID
> +#define KASAN_PAGE_REDZONE      KHWASAN_TAG_INVALID
> +#define KASAN_KMALLOC_REDZONE   KHWASAN_TAG_INVALID
> +#define KASAN_KMALLOC_FREE      KHWASAN_TAG_INVALID
> +#endif
> +
>  #define KASAN_GLOBAL_REDZONE    0xFA  /* redzone for global variable */
>
>  /*
> diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c
> index 9d91bf3c8246..6b1309278e39 100644
> --- a/mm/kasan/khwasan.c
> +++ b/mm/kasan/khwasan.c
> @@ -106,15 +106,52 @@ void *khwasan_preset_slab_tag(struct kmem_cache *cache, unsigned int idx,
>  void check_memory_region(unsigned long addr, size_t size, bool write,
>                                 unsigned long ret_ip)
>  {
> +       u8 tag;
> +       u8 *shadow_first, *shadow_last, *shadow;
> +       void *untagged_addr;
> +
> +       tag = get_tag((const void *)addr);
> +
> +       /* Ignore accesses for pointers tagged with 0xff (native kernel

/* on a separate line
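
i.e. the usual kernel style:

/*
 * Ignore accesses for pointers tagged with 0xff (native kernel
 * pointer tag) to suppress false positives caused by kmap.
 * ...
 */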

> +        * pointer tag) to suppress false positives caused by kmap.
> +        *
> +        * Some kernel code was written to account for archs that don't keep
> +        * high memory mapped all the time, but rather map and unmap particular
> +        * pages when needed. Instead of storing a pointer to the kernel memory,
> +        * this code saves the address of the page structure and offset within
> +        * that page for later use. Those pages are then mapped and unmapped
> +        * with kmap/kunmap when necessary and virt_to_page is used to get the
> +        * virtual address of the page. For arm64 (that keeps the high memory
> +        * mapped all the time), kmap is turned into a page_address call.
> +
> +        * The issue is that with use of the page_address + virt_to_page
> +        * sequence the top byte value of the original pointer gets lost (gets
> +        * set to KHWASAN_TAG_KERNEL (0xFF).

Missing closing bracket.
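
i.e.:

 * set to KHWASAN_TAG_KERNEL (0xFF)).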

> +        */
> +       if (tag == KHWASAN_TAG_KERNEL)
> +               return;
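
(For reference, the pattern described in the comment above that drops
the tag is roughly:

	struct page *page = virt_to_page(ptr);     /* ptr may be tagged */
	unsigned long offset = offset_in_page(ptr);
	/* ... later, when the memory is accessed again ... */
	void *vaddr = page_address(page) + offset; /* top byte is 0xff */

just a sketch, not code from the patch.)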
> +
> +       untagged_addr = reset_tag((const void *)addr);
> +       shadow_first = kasan_mem_to_shadow(untagged_addr);
> +       shadow_last = kasan_mem_to_shadow(untagged_addr + size - 1);
> +
> +       for (shadow = shadow_first; shadow <= shadow_last; shadow++) {
> +               if (*shadow != tag) {
> +                       kasan_report(addr, size, write, ret_ip);
> +                       return;
> +               }
> +       }
>  }
>
>  #define DEFINE_HWASAN_LOAD_STORE(size)                                 \
>         void __hwasan_load##size##_noabort(unsigned long addr)          \
>         {                                                               \
> +               check_memory_region(addr, size, false, _RET_IP_);       \
>         }                                                               \
>         EXPORT_SYMBOL(__hwasan_load##size##_noabort);                   \
>         void __hwasan_store##size##_noabort(unsigned long addr)         \
>         {                                                               \
> +               check_memory_region(addr, size, true, _RET_IP_);        \
>         }                                                               \
>         EXPORT_SYMBOL(__hwasan_store##size##_noabort)
>
> @@ -126,15 +163,18 @@ DEFINE_HWASAN_LOAD_STORE(16);
>
>  void __hwasan_loadN_noabort(unsigned long addr, unsigned long size)
>  {
> +       check_memory_region(addr, size, false, _RET_IP_);
>  }
>  EXPORT_SYMBOL(__hwasan_loadN_noabort);
>
>  void __hwasan_storeN_noabort(unsigned long addr, unsigned long size)
>  {
> +       check_memory_region(addr, size, true, _RET_IP_);
>  }
>  EXPORT_SYMBOL(__hwasan_storeN_noabort);
>
>  void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size)
>  {
> +       kasan_poison_shadow((void *)addr, size, tag);
>  }
>  EXPORT_SYMBOL(__hwasan_tag_memory);
> --
> 2.19.0.rc0.228.g281dcd1b4d0-goog
>
