From: Jann Horn <jannh@google.com>
To: Marco Elver <elver@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Alexander Potapenko <glider@google.com>,
	"H . Peter Anvin" <hpa@zytor.com>,
	"Paul E . McKenney" <paulmck@kernel.org>,
	Andrey Konovalov <andreyknvl@google.com>,
	Andrey Ryabinin <aryabinin@virtuozzo.com>,
	Andy Lutomirski <luto@kernel.org>, Borislav Petkov <bp@alien8.de>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Christoph Lameter <cl@linux.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	David Rientjes <rientjes@google.com>,
	Dmitry Vyukov <dvyukov@google.com>,
	Eric Dumazet <edumazet@google.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Hillf Danton <hdanton@sina.com>, Ingo Molnar <mingo@redhat.com>,
	Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	Jonathan Corbet <corbet@lwn.net>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	joern@purestorage.com, Kees Cook <keescook@chromium.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Pekka Enberg <penberg@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	SeongJae Park <sjpark@amazon.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Vlastimil Babka <vbabka@suse.cz>, Will Deacon <will@kernel.org>,
	"the arch/x86 maintainers" <x86@kernel.org>,
	"open list:DOCUMENTATION" <linux-doc@vger.kernel.org>,
	kernel list <linux-kernel@vger.kernel.org>,
	kasan-dev <kasan-dev@googlegroups.com>,
	Linux ARM <linux-arm-kernel@lists.infradead.org>,
	Linux-MM <linux-mm@kvack.org>, SeongJae Park <sjpark@amazon.de>
Subject: Re: [PATCH v6 1/9] mm: add Kernel Electric-Fence infrastructure
Date: Fri, 30 Oct 2020 03:49:12 +0100	[thread overview]
Message-ID: <CAG48ez0TgomTec+r188t0ddYVZtivOkL1DvR3owiuDTBtgPNzA@mail.gmail.com> (raw)
In-Reply-To: <20201029131649.182037-2-elver@google.com>

On Thu, Oct 29, 2020 at 2:17 PM Marco Elver <elver@google.com> wrote:
> This adds the Kernel Electric-Fence (KFENCE) infrastructure. KFENCE is a
> low-overhead sampling-based memory safety error detector of heap
> use-after-free, invalid-free, and out-of-bounds access errors.
[...]
> diff --git a/include/linux/kfence.h b/include/linux/kfence.h
[...]
> +/**
> + * is_kfence_address() - check if an address belongs to KFENCE pool
> + * @addr: address to check
> + *
> + * Return: true or false depending on whether the address is within the KFENCE
> + * object range.
> + *
> + * KFENCE objects live in a separate page range and are not to be intermixed
> + * with regular heap objects (e.g. KFENCE objects must never be added to the
> + * allocator freelists). Failing to do so may and will result in heap
> + * corruptions, therefore is_kfence_address() must be used to check whether
> + * an object requires specific handling.
> + */

It might be worth noting in the comment that this is one of the few
parts of KFENCE that are highly performance-sensitive, since that was
an important point during the review.
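
E.g. something like this could be appended to the kernel-doc (wording
is just a sketch):

 * Note: is_kfence_address() is on the fast path of all allocator
 * hooks, so whatever it does contributes directly to the overall
 * allocator overhead and must stay as cheap as possible.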

> +static __always_inline bool is_kfence_address(const void *addr)
> +{
> +       /*
> +        * The non-NULL check is required in case the __kfence_pool pointer was
> +        * never initialized; keep it in the slow-path after the range-check.
> +        */
> +       return unlikely((unsigned long)((char *)addr - __kfence_pool) < KFENCE_POOL_SIZE && addr);
> +}
[...]
> diff --git a/lib/Kconfig.kfence b/lib/Kconfig.kfence
[...]
> +config KFENCE_STRESS_TEST_FAULTS
> +       int "Stress testing of fault handling and error reporting"
> +       default 0
> +       depends on EXPERT
> +       help
> +         The inverse probability with which to randomly protect KFENCE object
> +         pages, resulting in spurious use-after-frees. The main purpose of
> +         this option is to stress test KFENCE with concurrent error reports
> +         and allocations/frees. A value of 0 disables stress testing logic.
> +
> +         The option is only to test KFENCE; set to 0 if you are unsure.
[...]
> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
[...]
> +#ifndef CONFIG_KFENCE_STRESS_TEST_FAULTS /* Only defined with CONFIG_EXPERT. */
> +#define CONFIG_KFENCE_STRESS_TEST_FAULTS 0
> +#endif

I think you can make this prettier by writing the Kconfig
appropriately. See e.g. ARCH_MMAP_RND_BITS:

config ARCH_MMAP_RND_BITS
  int "Number of bits to use for ASLR of mmap base address" if EXPERT
  range ARCH_MMAP_RND_BITS_MIN ARCH_MMAP_RND_BITS_MAX
  default ARCH_MMAP_RND_BITS_DEFAULT if ARCH_MMAP_RND_BITS_DEFAULT
  default ARCH_MMAP_RND_BITS_MIN
  depends on HAVE_ARCH_MMAP_RND_BITS

So instead of 'depends on EXPERT', I think the proper way would be to
append ' if EXPERT' to the line
'int "Stress testing of fault handling and error reporting"', so that
only whether the option is user-visible depends on EXPERT, and
non-EXPERT configs automatically use the default value.
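
Something like this, maybe (untested sketch, help text elided):

config KFENCE_STRESS_TEST_FAULTS
        int "Stress testing of fault handling and error reporting" if EXPERT
        default 0
        help
          ...

Since an int symbol with a default should then always get a value, the
#ifndef CONFIG_KFENCE_STRESS_TEST_FAULTS fallback in mm/kfence/core.c
could probably be dropped entirely.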

[...]
> +static inline unsigned long metadata_to_pageaddr(const struct kfence_metadata *meta)
> +{
> +       unsigned long offset = (meta - kfence_metadata + 1) * PAGE_SIZE * 2;
> +       unsigned long pageaddr = (unsigned long)&__kfence_pool[offset];
> +
> +       /* The checks do not affect performance; only called from slow-paths. */
> +
> +       /* Only call with a pointer into kfence_metadata. */
> +       if (KFENCE_WARN_ON(meta < kfence_metadata ||
> +                          meta >= kfence_metadata + CONFIG_KFENCE_NUM_OBJECTS))
> +               return 0;
> +
> +       /*
> +        * This metadata object only ever maps to 1 page; verify the calculation
> +        * happens and that the stored address was not corrupted.

nit: This reads a bit weirdly to me. Maybe "; verify that the stored
address is in the expected range"? But feel free to leave it as-is if
you prefer it that way.

> +        */
> +       if (KFENCE_WARN_ON(ALIGN_DOWN(meta->addr, PAGE_SIZE) != pageaddr))
> +               return 0;
> +
> +       return pageaddr;
> +}
[...]
> +/* __always_inline this to ensure we won't do an indirect call to fn. */
> +static __always_inline void for_each_canary(const struct kfence_metadata *meta, bool (*fn)(u8 *))
> +{
> +       const unsigned long pageaddr = ALIGN_DOWN(meta->addr, PAGE_SIZE);
> +       unsigned long addr;
> +
> +       lockdep_assert_held(&meta->lock);
> +
> +       /* Check left of object. */
> +       for (addr = pageaddr; addr < meta->addr; addr++) {
> +               if (!fn((u8 *)addr))
> +                       break;

It could be argued that "return" instead of "break" would be cleaner
here if the API is supposed to be "invoke fn() on each canary byte,
but stop when fn() returns false". But I suppose it doesn't really
matter, so either way is fine.
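
(I.e., as a sketch:

        for (addr = pageaddr; addr < meta->addr; addr++) {
                if (!fn((u8 *)addr))
                        return;
        }

with the same change in the loop for the right side.)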

> +       }
> +
> +       /* Check right of object. */
> +       for (addr = meta->addr + meta->size; addr < pageaddr + PAGE_SIZE; addr++) {
> +               if (!fn((u8 *)addr))
> +                       break;
> +       }
> +}
> +
> +static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t gfp)
> +{
[...]
> +       /* Set required struct page fields. */
> +       page = virt_to_page(meta->addr);
> +       page->slab_cache = cache;
> +       if (IS_ENABLED(CONFIG_SLUB))
> +               page->objects = 1;
> +       if (IS_ENABLED(CONFIG_SLAB))
> +               page->s_mem = addr;

Maybe move the last 4 lines over into the "hooks for SLAB" and "hooks
for SLUB" patches?

[...]
> +}
[...]
> diff --git a/mm/kfence/report.c b/mm/kfence/report.c
[...]
> +/*
> + * Get the number of stack entries to skip get out of MM internals. @type is

s/to skip get out/to skip to get out/ ?

> + * optional, and if set to NULL, assumes an allocation or free stack.
> + */
> +static int get_stack_skipnr(const unsigned long stack_entries[], int num_entries,
> +                           const enum kfence_error_type *type)
[...]
> +void kfence_report_error(unsigned long address, const struct kfence_metadata *meta,
> +                        enum kfence_error_type type)
> +{
[...]
> +       case KFENCE_ERROR_CORRUPTION: {
> +               size_t bytes_to_show = 16;
> +
> +               pr_err("BUG: KFENCE: memory corruption in %pS\n\n", (void *)stack_entries[skipnr]);
> +               pr_err("Corrupted memory at 0x" PTR_FMT " ", (void *)address);
> +
> +               if (address < meta->addr)
> +                       bytes_to_show = min(bytes_to_show, meta->addr - address);
> +               print_diff_canary((u8 *)address, bytes_to_show);

If the object was located on the right side, but with 1 byte padding
to the right due to alignment, and a 1-byte OOB write had clobbered
the canary byte on the right side, we would later detect a
KFENCE_ERROR_CORRUPTION at offset 0xfff inside the page, right? In
that case, I think we'd end up trying to read 15 canary bytes from the
following guard page and take a page fault?

You may want to do something like (untested):

unsigned long canary_end = (address < meta->addr)
        ? meta->addr : (address | (PAGE_SIZE - 1)) + 1;

bytes_to_show = min(bytes_to_show, (size_t)(canary_end - address));

That way canary_end is the first address past the canary region, and
we never read past the end of the page.

> +               pr_cont(" (in kfence-#%zd):\n", object_index);
> +               break;
> +       }
