From: Marco Elver <elver@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Alexander Potapenko <glider@google.com>,
	Dmitry Vyukov <dvyukov@google.com>, Jann Horn <jannh@google.com>,
	Aleksandr Nogikh <nogikh@google.com>,
	Taras Madan <tarasmadan@google.com>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux Memory Management List <linux-mm@kvack.org>,
	kasan-dev <kasan-dev@googlegroups.com>
Subject: Re: [PATCH v3 4/5] kfence: limit currently covered allocations when pool nearly full
Date: Fri, 24 Sep 2021 15:01:56 +0200	[thread overview]
Message-ID: <YU3MRGaCaJiYht5g@elver.google.com> (raw)
In-Reply-To: <20210923162811.3cc8188d6a30d9eed2375468@linux-foundation.org>

On Thu, Sep 23, 2021 at 04:28PM -0700, Andrew Morton wrote:
> On Thu, 23 Sep 2021 15:44:10 +0200 Marco Elver <elver@google.com> wrote:
[...]
> > I'm worried about next_pseudo_random32() changing its implementation
> > to no longer be deterministic, or changing in other ways that break
> > our usecase. In this case we want pseudorandomness, but we're not
> > implementing a PRNG.
> > 
> > Open-coding the constants (given they are from "Numerical Recipes") is
> > more reliable and doesn't introduce unwanted reliance on
> > next_pseudo_random32()'s behaviour.
> 
> Perhaps we could summarize this in an additional comment?

Hmm, on second thought, while trying to write the comment I realized it's
unnecessary altogether. I've switched to just using hash_32(), which is
probably better suited.

> Also, this:
> 
> +static u32 get_alloc_stack_hash(unsigned long *stack_entries, size_t num_entries)
> +{
> +	/* Some randomness across reboots / different machines. */
> +	u32 seed = (u32)((unsigned long)__kfence_pool >> (BITS_PER_LONG - 32));
> 
> seems a bit weak.  Would it be better to seed this at boot time with
> a randomish number?

Sure, makes sense.
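
Concretely, the old seed vs. the fixup (full patch below), where
random_get_entropy() typically reads the arch cycle counter, which should
be plenty for this non-cryptographic use:

/* v3: top bits of the pool address; vary little across boots. */
u32 seed = (u32)((unsigned long)__kfence_pool >> (BITS_PER_LONG - 32));

/* fixup: read once in kfence_init(), before any KFENCE allocations. */
stack_hash_seed = (u32)random_get_entropy();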

Both fixes are included in the fixup below. (Let me know if resending as
v4 is better, but I've seen the patches have already appeared in -mm.)

Thank you!

-- Marco

------ >8 ------

From: Marco Elver <elver@google.com>
Date: Fri, 24 Sep 2021 14:17:38 +0200
Subject: [PATCH] fixup! kfence: limit currently covered allocations when pool
 nearly full

* Simplify and just use hash_32().
* Use more random stack_hash_seed.

Signed-off-by: Marco Elver <elver@google.com>
---
 mm/kfence/core.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 58a0f6f1acc5..545999d04af4 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -10,6 +10,7 @@
 #include <linux/atomic.h>
 #include <linux/bug.h>
 #include <linux/debugfs.h>
+#include <linux/hash.h>
 #include <linux/irq_work.h>
 #include <linux/jhash.h>
 #include <linux/kcsan-checks.h>
@@ -122,14 +123,21 @@ atomic_t kfence_allocation_gate = ATOMIC_INIT(1);
 *	P(alloc_traces) = (1 - e^(-HNUM * (alloc_traces / SIZE))) ^ HNUM
  */
 #define ALLOC_COVERED_HNUM	2
-#define ALLOC_COVERED_SIZE	(1 << (const_ilog2(CONFIG_KFENCE_NUM_OBJECTS) + 2))
-#define ALLOC_COVERED_HNEXT(h)	(1664525 * (h) + 1013904223)
+#define ALLOC_COVERED_ORDER	(const_ilog2(CONFIG_KFENCE_NUM_OBJECTS) + 2)
+#define ALLOC_COVERED_SIZE	(1 << ALLOC_COVERED_ORDER)
+#define ALLOC_COVERED_HNEXT(h)	hash_32(h, ALLOC_COVERED_ORDER)
 #define ALLOC_COVERED_MASK	(ALLOC_COVERED_SIZE - 1)
 static atomic_t alloc_covered[ALLOC_COVERED_SIZE];
 
 /* Stack depth used to determine uniqueness of an allocation. */
 #define UNIQUE_ALLOC_STACK_DEPTH 8UL
 
+/*
+ * Randomness for stack hashes, making the same collisions across reboots and
+ * different machines less likely.
+ */
+static u32 stack_hash_seed __ro_after_init;
+
 /* Statistics counters for debugfs. */
 enum kfence_counter_id {
 	KFENCE_COUNTER_ALLOCATED,
@@ -166,12 +174,9 @@ static inline bool should_skip_covered(void)
 
 static u32 get_alloc_stack_hash(unsigned long *stack_entries, size_t num_entries)
 {
-	/* Some randomness across reboots / different machines. */
-	u32 seed = (u32)((unsigned long)__kfence_pool >> (BITS_PER_LONG - 32));
-
 	num_entries = min(num_entries, UNIQUE_ALLOC_STACK_DEPTH);
 	num_entries = filter_irq_stacks(stack_entries, num_entries);
-	return jhash(stack_entries, num_entries * sizeof(stack_entries[0]), seed);
+	return jhash(stack_entries, num_entries * sizeof(stack_entries[0]), stack_hash_seed);
 }
 
 /*
@@ -759,6 +764,7 @@ void __init kfence_init(void)
 	if (!kfence_sample_interval)
 		return;
 
+	stack_hash_seed = (u32)random_get_entropy();
 	if (!kfence_init_pool()) {
 		pr_err("%s failed\n", __func__);
 		return;
-- 
2.33.0.685.g46640cef36-goog

