From: Alexander Potapenko <glider@google.com>
To: Andrey Konovalov <andreyknvl@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>,
	Vincenzo Frascino <vincenzo.frascino@arm.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	kasan-dev <kasan-dev@googlegroups.com>,
	Andrey Ryabinin <aryabinin@virtuozzo.com>,
	Marco Elver <elver@google.com>,
	Evgenii Stepanov <eugenis@google.com>,
	Elena Petrova <lenaptr@google.com>,
	Branislav Rankov <Branislav.Rankov@arm.com>,
	Kevin Brodsky <kevin.brodsky@arm.com>,
	Will Deacon <will.deacon@arm.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linux ARM <linux-arm-kernel@lists.infradead.org>,
	Linux Memory Management List <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v2 05/37] kasan: rename KASAN_SHADOW_* to KASAN_GRANULE_*
Date: Fri, 18 Sep 2020 10:04:05 +0200	[thread overview]
Message-ID: <CAG_fn=X8uQoZUXM0cU8NwF41znWiFQS1GjSNtrh5-xM02-nnJw@mail.gmail.com> (raw)
In-Reply-To: <0d1862fec200eec644bbf0e2d5969fb94d2e923e.1600204505.git.andreyknvl@google.com>

On Tue, Sep 15, 2020 at 11:16 PM Andrey Konovalov <andreyknvl@google.com> wrote:
>
> This is a preparatory commit for the upcoming addition of a new hardware
> tag-based (MTE-based) KASAN mode.
>
> The new mode won't be using shadow memory, but will still use the concept
> of memory granules.

The KASAN documentation doesn't seem to explain this concept anywhere
(I also checked the "kasan: add documentation for hardware tag-based
mode" patch); it looks like it's only mentioned in the MTE
documentation. Could you please elaborate on what we consider a
granule in each of the KASAN modes?
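
For reference, my current mental model is that a granule is the
smallest unit of memory that one piece of KASAN metadata describes:
8 bytes in the software modes (from KASAN_SHADOW_SCALE_SHIFT == 3,
one shadow byte per granule), and presumably the 16-byte MTE tag
granule in the hardware mode. A quick illustrative sketch (user-space,
not from this patch; the constants mirror mm/kasan/kasan.h for the
software modes):

/* Illustrative sketch only -- mirrors the software-mode constants. */
#include <stdio.h>

#define KASAN_SHADOW_SCALE_SHIFT 3
#define KASAN_GRANULE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT) /* 8 bytes */
#define KASAN_GRANULE_MASK (KASAN_GRANULE_SIZE - 1)

int main(void)
{
	unsigned long addr = 0xffff000012345677UL; /* hypothetical address */

	/* One shadow byte covers one granule; the renamed macros are
	 * used to derive these two quantities: */
	printf("granule base:      %#lx\n", addr & ~KASAN_GRANULE_MASK);
	printf("offset in granule: %lu\n", addr & KASAN_GRANULE_MASK);
	return 0;
}

If that matches the intended meaning, spelling it out in the KASAN
documentation would help.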

> Rename KASAN_SHADOW_SCALE_SIZE to KASAN_GRANULE_SIZE,
> and KASAN_SHADOW_MASK to KASAN_GRANULE_MASK.
>
> Also, use MASK where the value is used as a mask, and SIZE otherwise.
>
> No functional changes.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
> ---
> Change-Id: Iac733e2248aa9d29f6fc425d8946ba07cca73ecf
> ---
>  Documentation/dev-tools/kasan.rst |  2 +-
>  lib/test_kasan.c                  |  2 +-
>  mm/kasan/common.c                 | 39 ++++++++++++++++---------------
>  mm/kasan/generic.c                | 14 +++++------
>  mm/kasan/generic_report.c         |  8 +++----
>  mm/kasan/init.c                   |  8 +++----
>  mm/kasan/kasan.h                  |  4 ++--
>  mm/kasan/report.c                 | 10 ++++----
>  mm/kasan/tags_report.c            |  2 +-
>  9 files changed, 45 insertions(+), 44 deletions(-)
>
> diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
> index 38fd5681fade..a3030fc6afe5 100644
> --- a/Documentation/dev-tools/kasan.rst
> +++ b/Documentation/dev-tools/kasan.rst
> @@ -264,7 +264,7 @@ Most mappings in vmalloc space are small, requiring less than a full
>  page of shadow space. Allocating a full shadow page per mapping would
>  therefore be wasteful. Furthermore, to ensure that different mappings
>  use different shadow pages, mappings would have to be aligned to
> -``KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE``.
> +``KASAN_GRANULE_SIZE * PAGE_SIZE``.
>
>  Instead, we share backing space across multiple mappings. We allocate
>  a backing page when a mapping in vmalloc space uses a particular page
> diff --git a/lib/test_kasan.c b/lib/test_kasan.c
> index 53e953bb1d1d..ddd0b80f24a1 100644
> --- a/lib/test_kasan.c
> +++ b/lib/test_kasan.c
> @@ -25,7 +25,7 @@
>
>  #include "../mm/kasan/kasan.h"
>
> -#define OOB_TAG_OFF (IS_ENABLED(CONFIG_KASAN_GENERIC) ? 0 : KASAN_SHADOW_SCALE_SIZE)
> +#define OOB_TAG_OFF (IS_ENABLED(CONFIG_KASAN_GENERIC) ? 0 : KASAN_GRANULE_SIZE)
>
>  /*
>   * We assign some test results to these globals to make sure the tests
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 65933b27df81..c9daf2c33651 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -111,7 +111,7 @@ void *memcpy(void *dest, const void *src, size_t len)
>
>  /*
>   * Poisons the shadow memory for 'size' bytes starting from 'addr'.
> - * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
> + * Memory addresses should be aligned to KASAN_GRANULE_SIZE.
>   */
>  void kasan_poison_memory(const void *address, size_t size, u8 value)
>  {
> @@ -143,13 +143,13 @@ void kasan_unpoison_memory(const void *address, size_t size)
>
>         kasan_poison_memory(address, size, tag);
>
> -       if (size & KASAN_SHADOW_MASK) {
> +       if (size & KASAN_GRANULE_MASK) {
>                 u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
>
>                 if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))
>                         *shadow = tag;
>                 else
> -                       *shadow = size & KASAN_SHADOW_MASK;
> +                       *shadow = size & KASAN_GRANULE_MASK;
>         }
>  }
>
> @@ -301,7 +301,7 @@ void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
>  void kasan_poison_object_data(struct kmem_cache *cache, void *object)
>  {
>         kasan_poison_memory(object,
> -                       round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
> +                       round_up(cache->object_size, KASAN_GRANULE_SIZE),
>                         KASAN_KMALLOC_REDZONE);
>  }
>
> @@ -373,7 +373,7 @@ static inline bool shadow_invalid(u8 tag, s8 shadow_byte)
>  {
>         if (IS_ENABLED(CONFIG_KASAN_GENERIC))
>                 return shadow_byte < 0 ||
> -                       shadow_byte >= KASAN_SHADOW_SCALE_SIZE;
> +                       shadow_byte >= KASAN_GRANULE_SIZE;
>
>         /* else CONFIG_KASAN_SW_TAGS: */
>         if ((u8)shadow_byte == KASAN_TAG_INVALID)
> @@ -412,7 +412,7 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
>                 return true;
>         }
>
> -       rounded_up_size = round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE);
> +       rounded_up_size = round_up(cache->object_size, KASAN_GRANULE_SIZE);
>         kasan_poison_memory(object, rounded_up_size, KASAN_KMALLOC_FREE);
>
>         if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
> @@ -445,9 +445,9 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
>                 return NULL;
>
>         redzone_start = round_up((unsigned long)(object + size),
> -                               KASAN_SHADOW_SCALE_SIZE);
> +                               KASAN_GRANULE_SIZE);
>         redzone_end = round_up((unsigned long)object + cache->object_size,
> -                               KASAN_SHADOW_SCALE_SIZE);
> +                               KASAN_GRANULE_SIZE);
>
>         if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))
>                 tag = assign_tag(cache, object, false, keep_tag);
> @@ -491,7 +491,7 @@ void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
>
>         page = virt_to_page(ptr);
>         redzone_start = round_up((unsigned long)(ptr + size),
> -                               KASAN_SHADOW_SCALE_SIZE);
> +                               KASAN_GRANULE_SIZE);
>         redzone_end = (unsigned long)ptr + page_size(page);
>
>         kasan_unpoison_memory(ptr, size);
> @@ -589,8 +589,8 @@ static int __meminit kasan_mem_notifier(struct notifier_block *nb,
>         shadow_size = nr_shadow_pages << PAGE_SHIFT;
>         shadow_end = shadow_start + shadow_size;
>
> -       if (WARN_ON(mem_data->nr_pages % KASAN_SHADOW_SCALE_SIZE) ||
> -               WARN_ON(start_kaddr % (KASAN_SHADOW_SCALE_SIZE << PAGE_SHIFT)))
> +       if (WARN_ON(mem_data->nr_pages % KASAN_GRANULE_SIZE) ||
> +               WARN_ON(start_kaddr % (KASAN_GRANULE_SIZE << PAGE_SHIFT)))
>                 return NOTIFY_BAD;
>
>         switch (action) {
> @@ -748,7 +748,7 @@ void kasan_poison_vmalloc(const void *start, unsigned long size)
>         if (!is_vmalloc_or_module_addr(start))
>                 return;
>
> -       size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
> +       size = round_up(size, KASAN_GRANULE_SIZE);
>         kasan_poison_memory(start, size, KASAN_VMALLOC_INVALID);
>  }
>
> @@ -861,22 +861,22 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
>         unsigned long region_start, region_end;
>         unsigned long size;
>
> -       region_start = ALIGN(start, PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE);
> -       region_end = ALIGN_DOWN(end, PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE);
> +       region_start = ALIGN(start, PAGE_SIZE * KASAN_GRANULE_SIZE);
> +       region_end = ALIGN_DOWN(end, PAGE_SIZE * KASAN_GRANULE_SIZE);
>
>         free_region_start = ALIGN(free_region_start,
> -                                 PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE);
> +                                 PAGE_SIZE * KASAN_GRANULE_SIZE);
>
>         if (start != region_start &&
>             free_region_start < region_start)
> -               region_start -= PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE;
> +               region_start -= PAGE_SIZE * KASAN_GRANULE_SIZE;
>
>         free_region_end = ALIGN_DOWN(free_region_end,
> -                                    PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE);
> +                                    PAGE_SIZE * KASAN_GRANULE_SIZE);
>
>         if (end != region_end &&
>             free_region_end > region_end)
> -               region_end += PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE;
> +               region_end += PAGE_SIZE * KASAN_GRANULE_SIZE;
>
>         shadow_start = kasan_mem_to_shadow((void *)region_start);
>         shadow_end = kasan_mem_to_shadow((void *)region_end);
> @@ -902,7 +902,8 @@ int kasan_module_alloc(void *addr, size_t size)
>         unsigned long shadow_start;
>
>         shadow_start = (unsigned long)kasan_mem_to_shadow(addr);
> -       scaled_size = (size + KASAN_SHADOW_MASK) >> KASAN_SHADOW_SCALE_SHIFT;
> +       scaled_size = (size + KASAN_GRANULE_SIZE - 1) >>
> +                               KASAN_SHADOW_SCALE_SHIFT;
>         shadow_size = round_up(scaled_size, PAGE_SIZE);
>
>         if (WARN_ON(!PAGE_ALIGNED(shadow_start)))
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index 4b5f905198d8..f6d68aa9872f 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -51,7 +51,7 @@ static __always_inline bool memory_is_poisoned_1(unsigned long addr)
>         s8 shadow_value = *(s8 *)kasan_mem_to_shadow((void *)addr);
>
>         if (unlikely(shadow_value)) {
> -               s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
> +               s8 last_accessible_byte = addr & KASAN_GRANULE_MASK;
>                 return unlikely(last_accessible_byte >= shadow_value);
>         }
>
> @@ -67,7 +67,7 @@ static __always_inline bool memory_is_poisoned_2_4_8(unsigned long addr,
>          * Access crosses 8(shadow size)-byte boundary. Such access maps
>          * into 2 shadow bytes, so we need to check them both.
>          */
> -       if (unlikely(((addr + size - 1) & KASAN_SHADOW_MASK) < size - 1))
> +       if (unlikely(((addr + size - 1) & KASAN_GRANULE_MASK) < size - 1))
>                 return *shadow_addr || memory_is_poisoned_1(addr + size - 1);
>
>         return memory_is_poisoned_1(addr + size - 1);
> @@ -78,7 +78,7 @@ static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>         u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
>
>         /* Unaligned 16-bytes access maps into 3 shadow bytes. */
> -       if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
> +       if (unlikely(!IS_ALIGNED(addr, KASAN_GRANULE_SIZE)))
>                 return *shadow_addr || memory_is_poisoned_1(addr + 15);
>
>         return *shadow_addr;
> @@ -139,7 +139,7 @@ static __always_inline bool memory_is_poisoned_n(unsigned long addr,
>                 s8 *last_shadow = (s8 *)kasan_mem_to_shadow((void *)last_byte);
>
>                 if (unlikely(ret != (unsigned long)last_shadow ||
> -                       ((long)(last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
> +                       ((long)(last_byte & KASAN_GRANULE_MASK) >= *last_shadow)))
>                         return true;
>         }
>         return false;
> @@ -205,7 +205,7 @@ void kasan_cache_shutdown(struct kmem_cache *cache)
>
>  static void register_global(struct kasan_global *global)
>  {
> -       size_t aligned_size = round_up(global->size, KASAN_SHADOW_SCALE_SIZE);
> +       size_t aligned_size = round_up(global->size, KASAN_GRANULE_SIZE);
>
>         kasan_unpoison_memory(global->beg, global->size);
>
> @@ -279,10 +279,10 @@ EXPORT_SYMBOL(__asan_handle_no_return);
>  /* Emitted by compiler to poison alloca()ed objects. */
>  void __asan_alloca_poison(unsigned long addr, size_t size)
>  {
> -       size_t rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
> +       size_t rounded_up_size = round_up(size, KASAN_GRANULE_SIZE);
>         size_t padding_size = round_up(size, KASAN_ALLOCA_REDZONE_SIZE) -
>                         rounded_up_size;
> -       size_t rounded_down_size = round_down(size, KASAN_SHADOW_SCALE_SIZE);
> +       size_t rounded_down_size = round_down(size, KASAN_GRANULE_SIZE);
>
>         const void *left_redzone = (const void *)(addr -
>                         KASAN_ALLOCA_REDZONE_SIZE);
> diff --git a/mm/kasan/generic_report.c b/mm/kasan/generic_report.c
> index a38c7a9e192a..4dce1633b082 100644
> --- a/mm/kasan/generic_report.c
> +++ b/mm/kasan/generic_report.c
> @@ -39,7 +39,7 @@ void *find_first_bad_addr(void *addr, size_t size)
>         void *p = addr;
>
>         while (p < addr + size && !(*(u8 *)kasan_mem_to_shadow(p)))
> -               p += KASAN_SHADOW_SCALE_SIZE;
> +               p += KASAN_GRANULE_SIZE;
>         return p;
>  }
>
> @@ -51,14 +51,14 @@ static const char *get_shadow_bug_type(struct kasan_access_info *info)
>         shadow_addr = (u8 *)kasan_mem_to_shadow(info->first_bad_addr);
>
>         /*
> -        * If shadow byte value is in [0, KASAN_SHADOW_SCALE_SIZE) we can look
> +        * If shadow byte value is in [0, KASAN_GRANULE_SIZE) we can look
>          * at the next shadow byte to determine the type of the bad access.
>          */
> -       if (*shadow_addr > 0 && *shadow_addr <= KASAN_SHADOW_SCALE_SIZE - 1)
> +       if (*shadow_addr > 0 && *shadow_addr <= KASAN_GRANULE_SIZE - 1)
>                 shadow_addr++;
>
>         switch (*shadow_addr) {
> -       case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
> +       case 0 ... KASAN_GRANULE_SIZE - 1:
>                 /*
>                  * In theory it's still possible to see these shadow values
>                  * due to a data race in the kernel code.
> diff --git a/mm/kasan/init.c b/mm/kasan/init.c
> index fe6be0be1f76..754b641c83c7 100644
> --- a/mm/kasan/init.c
> +++ b/mm/kasan/init.c
> @@ -447,8 +447,8 @@ void kasan_remove_zero_shadow(void *start, unsigned long size)
>         end = addr + (size >> KASAN_SHADOW_SCALE_SHIFT);
>
>         if (WARN_ON((unsigned long)start %
> -                       (KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE)) ||
> -           WARN_ON(size % (KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE)))
> +                       (KASAN_GRANULE_SIZE * PAGE_SIZE)) ||
> +           WARN_ON(size % (KASAN_GRANULE_SIZE * PAGE_SIZE)))
>                 return;
>
>         for (; addr < end; addr = next) {
> @@ -482,8 +482,8 @@ int kasan_add_zero_shadow(void *start, unsigned long size)
>         shadow_end = shadow_start + (size >> KASAN_SHADOW_SCALE_SHIFT);
>
>         if (WARN_ON((unsigned long)start %
> -                       (KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE)) ||
> -           WARN_ON(size % (KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE)))
> +                       (KASAN_GRANULE_SIZE * PAGE_SIZE)) ||
> +           WARN_ON(size % (KASAN_GRANULE_SIZE * PAGE_SIZE)))
>                 return -EINVAL;
>
>         ret = kasan_populate_early_shadow(shadow_start, shadow_end);
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 03450d3b31f7..c31e2c739301 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -5,8 +5,8 @@
>  #include <linux/kasan.h>
>  #include <linux/stackdepot.h>
>
> -#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
> -#define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
> +#define KASAN_GRANULE_SIZE     (1UL << KASAN_SHADOW_SCALE_SHIFT)
> +#define KASAN_GRANULE_MASK     (KASAN_GRANULE_SIZE - 1)
>
>  #define KASAN_TAG_KERNEL       0xFF /* native kernel pointers tag */
>  #define KASAN_TAG_INVALID      0xFE /* inaccessible memory tag */
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index 4f49fa6cd1aa..7c025d792e2f 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -317,24 +317,24 @@ static bool __must_check get_address_stack_frame_info(const void *addr,
>                 return false;
>
>         aligned_addr = round_down((unsigned long)addr, sizeof(long));
> -       mem_ptr = round_down(aligned_addr, KASAN_SHADOW_SCALE_SIZE);
> +       mem_ptr = round_down(aligned_addr, KASAN_GRANULE_SIZE);
>         shadow_ptr = kasan_mem_to_shadow((void *)aligned_addr);
>         shadow_bottom = kasan_mem_to_shadow(end_of_stack(current));
>
>         while (shadow_ptr >= shadow_bottom && *shadow_ptr != KASAN_STACK_LEFT) {
>                 shadow_ptr--;
> -               mem_ptr -= KASAN_SHADOW_SCALE_SIZE;
> +               mem_ptr -= KASAN_GRANULE_SIZE;
>         }
>
>         while (shadow_ptr >= shadow_bottom && *shadow_ptr == KASAN_STACK_LEFT) {
>                 shadow_ptr--;
> -               mem_ptr -= KASAN_SHADOW_SCALE_SIZE;
> +               mem_ptr -= KASAN_GRANULE_SIZE;
>         }
>
>         if (shadow_ptr < shadow_bottom)
>                 return false;
>
> -       frame = (const unsigned long *)(mem_ptr + KASAN_SHADOW_SCALE_SIZE);
> +       frame = (const unsigned long *)(mem_ptr + KASAN_GRANULE_SIZE);
>         if (frame[0] != KASAN_CURRENT_STACK_FRAME_MAGIC) {
>                 pr_err("KASAN internal error: frame info validation failed; invalid marker: %lu\n",
>                        frame[0]);
> @@ -572,6 +572,6 @@ void kasan_non_canonical_hook(unsigned long addr)
>         else
>                 bug_type = "maybe wild-memory-access";
>         pr_alert("KASAN: %s in range [0x%016lx-0x%016lx]\n", bug_type,
> -                orig_addr, orig_addr + KASAN_SHADOW_MASK);
> +                orig_addr, orig_addr + KASAN_GRANULE_SIZE - 1);
>  }
>  #endif
> diff --git a/mm/kasan/tags_report.c b/mm/kasan/tags_report.c
> index bee43717d6f0..6ddb55676a7c 100644
> --- a/mm/kasan/tags_report.c
> +++ b/mm/kasan/tags_report.c
> @@ -81,7 +81,7 @@ void *find_first_bad_addr(void *addr, size_t size)
>         void *end = p + size;
>
>         while (p < end && tag == *(u8 *)kasan_mem_to_shadow(p))
> -               p += KASAN_SHADOW_SCALE_SIZE;
> +               p += KASAN_GRANULE_SIZE;
>         return p;
>  }
>
> --
> 2.28.0.618.gf4bc123cb7-goog
>
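
FWIW, for the two spots where KASAN_SHADOW_MASK was actually used as a
size rather than a mask (kasan_module_alloc() and
kasan_non_canonical_hook()), the substitution is an identity, since
the mask is defined as the size minus one. A quick user-space sanity
check (illustrative only, not from the patch):

#include <assert.h>
#include <stddef.h>

#define KASAN_SHADOW_SCALE_SHIFT 3
#define KASAN_GRANULE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
#define KASAN_GRANULE_MASK (KASAN_GRANULE_SIZE - 1)

int main(void)
{
	size_t size;

	for (size = 0; size < 4 * KASAN_GRANULE_SIZE; size++)
		/* old spelling == new spelling for every size */
		assert(((size + KASAN_GRANULE_MASK) >> KASAN_SHADOW_SCALE_SHIFT) ==
		       ((size + KASAN_GRANULE_SIZE - 1) >> KASAN_SHADOW_SCALE_SHIFT));
	return 0;
}

So I agree there's no functional change there.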


-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg
