* [PATCH 00/18] lib/stackdepot: fixes and clean-ups
@ 2023-01-30 20:49 andrey.konovalov
  2023-01-30 20:49 ` [PATCH 01/18] lib/stackdepot: fix setting next_slab_inited in init_stack_slab andrey.konovalov
                   ` (17 more replies)
  0 siblings, 18 replies; 51+ messages in thread
From: andrey.konovalov @ 2023-01-30 20:49 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Vlastimil Babka, kasan-dev, Evgenii Stepanov,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

A set of fixes, comment improvements, and clean-ups I came up with while
reading the stack depot code.

The only fix that might be worth backporting to stable kernels is
in the first patch.

Andrey Konovalov (18):
  lib/stackdepot: fix setting next_slab_inited in init_stack_slab
  lib/stackdepot: put functions in logical order
  lib/stackdepot: use pr_fmt to define message format
  lib/stackdepot, mm: rename stack_depot_want_early_init
  lib/stackdepot: rename stack_depot_disable
  lib/stackdepot: annotate init and early init functions
  lib/stackdepot: lower the indentation in stack_depot_init
  lib/stackdepot: reorder and annotate global variables
  lib/stackdepot: rename hash table constants and variables
  lib/stackdepot: rename init_stack_slab
  lib/stackdepot: rename slab variables
  lib/stackdepot: rename handle and slab constants
  lib/stacktrace: drop impossible WARN_ON for depot_init_slab
  lib/stackdepot: annotate depot_init_slab and depot_alloc_stack
  lib/stacktrace, kasan, kmsan: rework extra_bits interface
  lib/stackdepot: annotate racy slab_index accesses
  lib/stackdepot: various comments clean-ups
  lib/stackdepot: move documentation comments to stackdepot.h

 include/linux/stackdepot.h | 152 +++++++--
 lib/stackdepot.c           | 628 ++++++++++++++++++-------------------
 mm/kasan/common.c          |   2 +-
 mm/kmsan/core.c            |  10 +-
 mm/page_owner.c            |   2 +-
 mm/slub.c                  |   4 +-
 6 files changed, 435 insertions(+), 363 deletions(-)

-- 
2.25.1



* [PATCH 01/18] lib/stackdepot: fix setting next_slab_inited in init_stack_slab
  2023-01-30 20:49 [PATCH 00/18] lib/stackdepot: fixes and clean-ups andrey.konovalov
@ 2023-01-30 20:49 ` andrey.konovalov
  2023-01-31  0:18   ` Andrew Morton
                     ` (2 more replies)
  2023-01-30 20:49 ` [PATCH 02/18] lib/stackdepot: put functions in logical order andrey.konovalov
                   ` (16 subsequent siblings)
  17 siblings, 3 replies; 51+ messages in thread
From: andrey.konovalov @ 2023-01-30 20:49 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Vlastimil Babka, kasan-dev, Evgenii Stepanov,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

In commit 305e519ce48e ("lib/stackdepot.c: fix global out-of-bounds in
stack_slabs"), init_stack_slab was changed to only use preallocated
memory for the next slab if the slab number limit is not reached.
However, setting next_slab_inited was not moved together with updating
stack_slabs.

Set next_slab_inited only if the preallocated memory was used for the
next slab.
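
For readers unfamiliar with the pairing the comments refer to, here is a
minimal generic sketch of the release/acquire publish pattern (an
illustration with made-up names, not the stack depot code itself):

#include <asm/barrier.h>

static int data;
static int ready;

/* Writer: store the data before publishing the flag. */
static void publish(int value)
{
	data = value;
	smp_store_release(&ready, 1);
}

/* Reader: if the acquire observes ready == 1, data is guaranteed visible. */
static int try_consume(void)
{
	if (smp_load_acquire(&ready))
		return data;
	return -1; /* Not published yet. */
}

Setting the flag without having stored the data, as the pre-fix code could
do with next_slab_inited and the preallocated slab, breaks exactly this
guarantee.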

Fixes: 305e519ce48e ("lib/stackdepot.c: fix global out-of-bounds in stack_slabs")
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 lib/stackdepot.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 79e894cf8406..0eed9bbcf23e 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -105,12 +105,13 @@ static bool init_stack_slab(void **prealloc)
 		if (depot_index + 1 < STACK_ALLOC_MAX_SLABS) {
 			stack_slabs[depot_index + 1] = *prealloc;
 			*prealloc = NULL;
+			/*
+			 * This smp_store_release pairs with smp_load_acquire()
+			 * from |next_slab_inited| above and in
+			 * stack_depot_save().
+			 */
+			smp_store_release(&next_slab_inited, 1);
 		}
-		/*
-		 * This smp_store_release pairs with smp_load_acquire() from
-		 * |next_slab_inited| above and in stack_depot_save().
-		 */
-		smp_store_release(&next_slab_inited, 1);
 	}
 	return true;
 }
-- 
2.25.1



* [PATCH 02/18] lib/stackdepot: put functions in logical order
  2023-01-30 20:49 [PATCH 00/18] lib/stackdepot: fixes and clean-ups andrey.konovalov
  2023-01-30 20:49 ` [PATCH 01/18] lib/stackdepot: fix setting next_slab_inited in init_stack_slab andrey.konovalov
@ 2023-01-30 20:49 ` andrey.konovalov
  2023-01-31 10:20   ` Alexander Potapenko
  2023-01-30 20:49 ` [PATCH 03/18] lib/stackdepot: use pr_fmt to define message format andrey.konovalov
                   ` (15 subsequent siblings)
  17 siblings, 1 reply; 51+ messages in thread
From: andrey.konovalov @ 2023-01-30 20:49 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Vlastimil Babka, kasan-dev, Evgenii Stepanov,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Put stack depot functions' declarations and definitions in a more logical
order:

1. Functions that save stack traces into stack depot.
2. Functions that fetch and print stack traces.
3. stack_depot_get_extra_bits, which operates on stack depot handles
   and does not interact with the stack depot storage.

No functional changes.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 include/linux/stackdepot.h |  15 +-
 lib/stackdepot.c           | 316 ++++++++++++++++++-------------------
 2 files changed, 166 insertions(+), 165 deletions(-)

diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
index 9ca7798d7a31..1296a6eeaec0 100644
--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -14,17 +14,13 @@
 #include <linux/gfp.h>
 
 typedef u32 depot_stack_handle_t;
+
 /*
  * Number of bits in the handle that stack depot doesn't use. Users may store
  * information in them.
  */
 #define STACK_DEPOT_EXTRA_BITS 5
 
-depot_stack_handle_t __stack_depot_save(unsigned long *entries,
-					unsigned int nr_entries,
-					unsigned int extra_bits,
-					gfp_t gfp_flags, bool can_alloc);
-
 /*
  * Every user of stack depot has to call stack_depot_init() during its own init
  * when it's decided that it will be calling stack_depot_save() later. This is
@@ -59,17 +55,22 @@ static inline void stack_depot_want_early_init(void) { }
 static inline int stack_depot_early_init(void)	{ return 0; }
 #endif
 
+depot_stack_handle_t __stack_depot_save(unsigned long *entries,
+					unsigned int nr_entries,
+					unsigned int extra_bits,
+					gfp_t gfp_flags, bool can_alloc);
+
 depot_stack_handle_t stack_depot_save(unsigned long *entries,
 				      unsigned int nr_entries, gfp_t gfp_flags);
 
 unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 			       unsigned long **entries);
 
-unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle);
+void stack_depot_print(depot_stack_handle_t stack);
 
 int stack_depot_snprint(depot_stack_handle_t handle, char *buf, size_t size,
 		       int spaces);
 
-void stack_depot_print(depot_stack_handle_t stack);
+unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle);
 
 #endif
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 0eed9bbcf23e..23d2a68a587b 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -79,85 +79,6 @@ static int next_slab_inited;
 static size_t depot_offset;
 static DEFINE_RAW_SPINLOCK(depot_lock);
 
-unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle)
-{
-	union handle_parts parts = { .handle = handle };
-
-	return parts.extra;
-}
-EXPORT_SYMBOL(stack_depot_get_extra_bits);
-
-static bool init_stack_slab(void **prealloc)
-{
-	if (!*prealloc)
-		return false;
-	/*
-	 * This smp_load_acquire() pairs with smp_store_release() to
-	 * |next_slab_inited| below and in depot_alloc_stack().
-	 */
-	if (smp_load_acquire(&next_slab_inited))
-		return true;
-	if (stack_slabs[depot_index] == NULL) {
-		stack_slabs[depot_index] = *prealloc;
-		*prealloc = NULL;
-	} else {
-		/* If this is the last depot slab, do not touch the next one. */
-		if (depot_index + 1 < STACK_ALLOC_MAX_SLABS) {
-			stack_slabs[depot_index + 1] = *prealloc;
-			*prealloc = NULL;
-			/*
-			 * This smp_store_release pairs with smp_load_acquire()
-			 * from |next_slab_inited| above and in
-			 * stack_depot_save().
-			 */
-			smp_store_release(&next_slab_inited, 1);
-		}
-	}
-	return true;
-}
-
-/* Allocation of a new stack in raw storage */
-static struct stack_record *
-depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
-{
-	struct stack_record *stack;
-	size_t required_size = struct_size(stack, entries, size);
-
-	required_size = ALIGN(required_size, 1 << STACK_ALLOC_ALIGN);
-
-	if (unlikely(depot_offset + required_size > STACK_ALLOC_SIZE)) {
-		if (unlikely(depot_index + 1 >= STACK_ALLOC_MAX_SLABS)) {
-			WARN_ONCE(1, "Stack depot reached limit capacity");
-			return NULL;
-		}
-		depot_index++;
-		depot_offset = 0;
-		/*
-		 * smp_store_release() here pairs with smp_load_acquire() from
-		 * |next_slab_inited| in stack_depot_save() and
-		 * init_stack_slab().
-		 */
-		if (depot_index + 1 < STACK_ALLOC_MAX_SLABS)
-			smp_store_release(&next_slab_inited, 0);
-	}
-	init_stack_slab(prealloc);
-	if (stack_slabs[depot_index] == NULL)
-		return NULL;
-
-	stack = stack_slabs[depot_index] + depot_offset;
-
-	stack->hash = hash;
-	stack->size = size;
-	stack->handle.slabindex = depot_index;
-	stack->handle.offset = depot_offset >> STACK_ALLOC_ALIGN;
-	stack->handle.valid = 1;
-	stack->handle.extra = 0;
-	memcpy(stack->entries, entries, flex_array_size(stack, entries, size));
-	depot_offset += required_size;
-
-	return stack;
-}
-
 /* one hash table bucket entry per 16kB of memory */
 #define STACK_HASH_SCALE	14
 /* limited between 4k and 1M buckets */
@@ -271,6 +192,77 @@ int stack_depot_init(void)
 }
 EXPORT_SYMBOL_GPL(stack_depot_init);
 
+static bool init_stack_slab(void **prealloc)
+{
+	if (!*prealloc)
+		return false;
+	/*
+	 * This smp_load_acquire() pairs with smp_store_release() to
+	 * |next_slab_inited| below and in depot_alloc_stack().
+	 */
+	if (smp_load_acquire(&next_slab_inited))
+		return true;
+	if (stack_slabs[depot_index] == NULL) {
+		stack_slabs[depot_index] = *prealloc;
+		*prealloc = NULL;
+	} else {
+		/* If this is the last depot slab, do not touch the next one. */
+		if (depot_index + 1 < STACK_ALLOC_MAX_SLABS) {
+			stack_slabs[depot_index + 1] = *prealloc;
+			*prealloc = NULL;
+			/*
+			 * This smp_store_release pairs with smp_load_acquire()
+			 * from |next_slab_inited| above and in
+			 * stack_depot_save().
+			 */
+			smp_store_release(&next_slab_inited, 1);
+		}
+	}
+	return true;
+}
+
+/* Allocation of a new stack in raw storage */
+static struct stack_record *
+depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
+{
+	struct stack_record *stack;
+	size_t required_size = struct_size(stack, entries, size);
+
+	required_size = ALIGN(required_size, 1 << STACK_ALLOC_ALIGN);
+
+	if (unlikely(depot_offset + required_size > STACK_ALLOC_SIZE)) {
+		if (unlikely(depot_index + 1 >= STACK_ALLOC_MAX_SLABS)) {
+			WARN_ONCE(1, "Stack depot reached limit capacity");
+			return NULL;
+		}
+		depot_index++;
+		depot_offset = 0;
+		/*
+		 * smp_store_release() here pairs with smp_load_acquire() from
+		 * |next_slab_inited| in stack_depot_save() and
+		 * init_stack_slab().
+		 */
+		if (depot_index + 1 < STACK_ALLOC_MAX_SLABS)
+			smp_store_release(&next_slab_inited, 0);
+	}
+	init_stack_slab(prealloc);
+	if (stack_slabs[depot_index] == NULL)
+		return NULL;
+
+	stack = stack_slabs[depot_index] + depot_offset;
+
+	stack->hash = hash;
+	stack->size = size;
+	stack->handle.slabindex = depot_index;
+	stack->handle.offset = depot_offset >> STACK_ALLOC_ALIGN;
+	stack->handle.valid = 1;
+	stack->handle.extra = 0;
+	memcpy(stack->entries, entries, flex_array_size(stack, entries, size));
+	depot_offset += required_size;
+
+	return stack;
+}
+
 /* Calculate hash for a stack */
 static inline u32 hash_stack(unsigned long *entries, unsigned int size)
 {
@@ -310,85 +302,6 @@ static inline struct stack_record *find_stack(struct stack_record *bucket,
 	return NULL;
 }
 
-/**
- * stack_depot_snprint - print stack entries from a depot into a buffer
- *
- * @handle:	Stack depot handle which was returned from
- *		stack_depot_save().
- * @buf:	Pointer to the print buffer
- *
- * @size:	Size of the print buffer
- *
- * @spaces:	Number of leading spaces to print
- *
- * Return:	Number of bytes printed.
- */
-int stack_depot_snprint(depot_stack_handle_t handle, char *buf, size_t size,
-		       int spaces)
-{
-	unsigned long *entries;
-	unsigned int nr_entries;
-
-	nr_entries = stack_depot_fetch(handle, &entries);
-	return nr_entries ? stack_trace_snprint(buf, size, entries, nr_entries,
-						spaces) : 0;
-}
-EXPORT_SYMBOL_GPL(stack_depot_snprint);
-
-/**
- * stack_depot_print - print stack entries from a depot
- *
- * @stack:		Stack depot handle which was returned from
- *			stack_depot_save().
- *
- */
-void stack_depot_print(depot_stack_handle_t stack)
-{
-	unsigned long *entries;
-	unsigned int nr_entries;
-
-	nr_entries = stack_depot_fetch(stack, &entries);
-	if (nr_entries > 0)
-		stack_trace_print(entries, nr_entries, 0);
-}
-EXPORT_SYMBOL_GPL(stack_depot_print);
-
-/**
- * stack_depot_fetch - Fetch stack entries from a depot
- *
- * @handle:		Stack depot handle which was returned from
- *			stack_depot_save().
- * @entries:		Pointer to store the entries address
- *
- * Return: The number of trace entries for this depot.
- */
-unsigned int stack_depot_fetch(depot_stack_handle_t handle,
-			       unsigned long **entries)
-{
-	union handle_parts parts = { .handle = handle };
-	void *slab;
-	size_t offset = parts.offset << STACK_ALLOC_ALIGN;
-	struct stack_record *stack;
-
-	*entries = NULL;
-	if (!handle)
-		return 0;
-
-	if (parts.slabindex > depot_index) {
-		WARN(1, "slab index %d out of bounds (%d) for stack id %08x\n",
-			parts.slabindex, depot_index, handle);
-		return 0;
-	}
-	slab = stack_slabs[parts.slabindex];
-	if (!slab)
-		return 0;
-	stack = slab + offset;
-
-	*entries = stack->entries;
-	return stack->size;
-}
-EXPORT_SYMBOL_GPL(stack_depot_fetch);
-
 /**
  * __stack_depot_save - Save a stack trace from an array
  *
@@ -534,3 +447,90 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
 	return __stack_depot_save(entries, nr_entries, 0, alloc_flags, true);
 }
 EXPORT_SYMBOL_GPL(stack_depot_save);
+
+/**
+ * stack_depot_fetch - Fetch stack entries from a depot
+ *
+ * @handle:		Stack depot handle which was returned from
+ *			stack_depot_save().
+ * @entries:		Pointer to store the entries address
+ *
+ * Return: The number of trace entries for this depot.
+ */
+unsigned int stack_depot_fetch(depot_stack_handle_t handle,
+			       unsigned long **entries)
+{
+	union handle_parts parts = { .handle = handle };
+	void *slab;
+	size_t offset = parts.offset << STACK_ALLOC_ALIGN;
+	struct stack_record *stack;
+
+	*entries = NULL;
+	if (!handle)
+		return 0;
+
+	if (parts.slabindex > depot_index) {
+		WARN(1, "slab index %d out of bounds (%d) for stack id %08x\n",
+			parts.slabindex, depot_index, handle);
+		return 0;
+	}
+	slab = stack_slabs[parts.slabindex];
+	if (!slab)
+		return 0;
+	stack = slab + offset;
+
+	*entries = stack->entries;
+	return stack->size;
+}
+EXPORT_SYMBOL_GPL(stack_depot_fetch);
+
+/**
+ * stack_depot_print - print stack entries from a depot
+ *
+ * @stack:		Stack depot handle which was returned from
+ *			stack_depot_save().
+ *
+ */
+void stack_depot_print(depot_stack_handle_t stack)
+{
+	unsigned long *entries;
+	unsigned int nr_entries;
+
+	nr_entries = stack_depot_fetch(stack, &entries);
+	if (nr_entries > 0)
+		stack_trace_print(entries, nr_entries, 0);
+}
+EXPORT_SYMBOL_GPL(stack_depot_print);
+
+/**
+ * stack_depot_snprint - print stack entries from a depot into a buffer
+ *
+ * @handle:	Stack depot handle which was returned from
+ *		stack_depot_save().
+ * @buf:	Pointer to the print buffer
+ *
+ * @size:	Size of the print buffer
+ *
+ * @spaces:	Number of leading spaces to print
+ *
+ * Return:	Number of bytes printed.
+ */
+int stack_depot_snprint(depot_stack_handle_t handle, char *buf, size_t size,
+		       int spaces)
+{
+	unsigned long *entries;
+	unsigned int nr_entries;
+
+	nr_entries = stack_depot_fetch(handle, &entries);
+	return nr_entries ? stack_trace_snprint(buf, size, entries, nr_entries,
+						spaces) : 0;
+}
+EXPORT_SYMBOL_GPL(stack_depot_snprint);
+
+unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle)
+{
+	union handle_parts parts = { .handle = handle };
+
+	return parts.extra;
+}
+EXPORT_SYMBOL(stack_depot_get_extra_bits);
-- 
2.25.1



* [PATCH 03/18] lib/stackdepot: use pr_fmt to define message format
  2023-01-30 20:49 [PATCH 00/18] lib/stackdepot: fixes and clean-ups andrey.konovalov
  2023-01-30 20:49 ` [PATCH 01/18] lib/stackdepot: fix setting next_slab_inited in init_stack_slab andrey.konovalov
  2023-01-30 20:49 ` [PATCH 02/18] lib/stackdepot: put functions in logical order andrey.konovalov
@ 2023-01-30 20:49 ` andrey.konovalov
  2023-01-31 10:24   ` Alexander Potapenko
  2023-01-30 20:49 ` [PATCH 04/18] lib/stackdepot, mm: rename stack_depot_want_early_init andrey.konovalov
                   ` (14 subsequent siblings)
  17 siblings, 1 reply; 51+ messages in thread
From: andrey.konovalov @ 2023-01-30 20:49 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Vlastimil Babka, kasan-dev, Evgenii Stepanov,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Use pr_fmt to define the format for printing stack depot messages instead
of duplicating the "Stack Depot" prefix in each message.
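
For context, pr_fmt is a plain macro that the pr_*() helpers paste in front
of every format string, so defining it once gives all messages in a file a
common prefix. A minimal sketch (the "stackdepot: " prefix matches this
patch; the function and message are illustrative):

#define pr_fmt(fmt) "stackdepot: " fmt

#include <linux/printk.h>

static void example(void)
{
	/* Expands to printk(KERN_INFO "stackdepot: disabled\n"). */
	pr_info("disabled\n");
}

The define must come before printk.h is pulled in, which is why the patch
places it above all of the file's #include directives.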

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 lib/stackdepot.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 23d2a68a587b..90c4dd48d75e 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -19,6 +19,8 @@
  * Based on code by Dmitry Chernenkov.
  */
 
+#define pr_fmt(fmt) "stackdepot: " fmt
+
 #include <linux/gfp.h>
 #include <linux/jhash.h>
 #include <linux/kernel.h>
@@ -98,7 +100,7 @@ static int __init is_stack_depot_disabled(char *str)
 
 	ret = kstrtobool(str, &stack_depot_disable);
 	if (!ret && stack_depot_disable) {
-		pr_info("Stack Depot is disabled\n");
+		pr_info("disabled\n");
 		stack_table = NULL;
 	}
 	return 0;
@@ -142,7 +144,7 @@ int __init stack_depot_early_init(void)
 						1UL << STACK_HASH_ORDER_MAX);
 
 	if (!stack_table) {
-		pr_err("Stack Depot hash table allocation failed, disabling\n");
+		pr_err("hash table allocation failed, disabling\n");
 		stack_depot_disable = true;
 		return -ENOMEM;
 	}
@@ -177,11 +179,11 @@ int stack_depot_init(void)
 		if (entries > 1UL << STACK_HASH_ORDER_MAX)
 			entries = 1UL << STACK_HASH_ORDER_MAX;
 
-		pr_info("Stack Depot allocating hash table of %lu entries with kvcalloc\n",
+		pr_info("allocating hash table of %lu entries with kvcalloc\n",
 				entries);
 		stack_table = kvcalloc(entries, sizeof(struct stack_record *), GFP_KERNEL);
 		if (!stack_table) {
-			pr_err("Stack Depot hash table allocation failed, disabling\n");
+			pr_err("hash table allocation failed, disabling\n");
 			stack_depot_disable = true;
 			ret = -ENOMEM;
 		}
-- 
2.25.1



* [PATCH 04/18] lib/stackdepot, mm: rename stack_depot_want_early_init
  2023-01-30 20:49 [PATCH 00/18] lib/stackdepot: fixes and clean-ups andrey.konovalov
                   ` (2 preceding siblings ...)
  2023-01-30 20:49 ` [PATCH 03/18] lib/stackdepot: use pr_fmt to define message format andrey.konovalov
@ 2023-01-30 20:49 ` andrey.konovalov
  2023-01-31 10:26   ` Alexander Potapenko
  2023-02-08 16:40   ` Vlastimil Babka
  2023-01-30 20:49 ` [PATCH 05/18] lib/stackdepot: rename stack_depot_disable andrey.konovalov
                   ` (13 subsequent siblings)
  17 siblings, 2 replies; 51+ messages in thread
From: andrey.konovalov @ 2023-01-30 20:49 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Vlastimil Babka, kasan-dev, Evgenii Stepanov,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Rename stack_depot_want_early_init to stack_depot_request_early_init.

The old name is confusing, as it hints at stack depot returning some kind
of intention. The new name reflects that this function requests an action
from stack depot instead.

No functional changes.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 include/linux/stackdepot.h | 14 +++++++-------
 lib/stackdepot.c           | 10 +++++-----
 mm/page_owner.c            |  2 +-
 mm/slub.c                  |  4 ++--
 4 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
index 1296a6eeaec0..c4e3abc16b16 100644
--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -31,26 +31,26 @@ typedef u32 depot_stack_handle_t;
  * enabled as part of mm_init(), for subsystems where it's known at compile time
  * that stack depot will be used.
  *
- * Another alternative is to call stack_depot_want_early_init(), when the
+ * Another alternative is to call stack_depot_request_early_init(), when the
  * decision to use stack depot is taken e.g. when evaluating kernel boot
  * parameters, which precedes the enablement point in mm_init().
  *
- * stack_depot_init() and stack_depot_want_early_init() can be called regardless
- * of CONFIG_STACKDEPOT and are no-op when disabled. The actual save/fetch/print
- * functions should only be called from code that makes sure CONFIG_STACKDEPOT
- * is enabled.
+ * stack_depot_init() and stack_depot_request_early_init() can be called
+ * regardless of CONFIG_STACKDEPOT and are no-op when disabled. The actual
+ * save/fetch/print functions should only be called from code that makes sure
+ * CONFIG_STACKDEPOT is enabled.
  */
 #ifdef CONFIG_STACKDEPOT
 int stack_depot_init(void);
 
-void __init stack_depot_want_early_init(void);
+void __init stack_depot_request_early_init(void);
 
 /* This is supposed to be called only from mm_init() */
 int __init stack_depot_early_init(void);
 #else
 static inline int stack_depot_init(void) { return 0; }
 
-static inline void stack_depot_want_early_init(void) { }
+static inline void stack_depot_request_early_init(void) { }
 
 static inline int stack_depot_early_init(void)	{ return 0; }
 #endif
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 90c4dd48d75e..8743fad1485f 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -71,7 +71,7 @@ struct stack_record {
 	unsigned long entries[];	/* Variable-sized array of entries. */
 };
 
-static bool __stack_depot_want_early_init __initdata = IS_ENABLED(CONFIG_STACKDEPOT_ALWAYS_INIT);
+static bool __stack_depot_early_init_requested __initdata = IS_ENABLED(CONFIG_STACKDEPOT_ALWAYS_INIT);
 static bool __stack_depot_early_init_passed __initdata;
 
 static void *stack_slabs[STACK_ALLOC_MAX_SLABS];
@@ -107,12 +107,12 @@ static int __init is_stack_depot_disabled(char *str)
 }
 early_param("stack_depot_disable", is_stack_depot_disabled);
 
-void __init stack_depot_want_early_init(void)
+void __init stack_depot_request_early_init(void)
 {
-	/* Too late to request early init now */
+	/* Too late to request early init now. */
 	WARN_ON(__stack_depot_early_init_passed);
 
-	__stack_depot_want_early_init = true;
+	__stack_depot_early_init_requested = true;
 }
 
 int __init stack_depot_early_init(void)
@@ -128,7 +128,7 @@ int __init stack_depot_early_init(void)
 	if (kasan_enabled() && !stack_hash_order)
 		stack_hash_order = STACK_HASH_ORDER_MAX;
 
-	if (!__stack_depot_want_early_init || stack_depot_disable)
+	if (!__stack_depot_early_init_requested || stack_depot_disable)
 		return 0;
 
 	if (stack_hash_order)
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 2d27f532df4c..90a4a087e6c7 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -48,7 +48,7 @@ static int __init early_page_owner_param(char *buf)
 	int ret = kstrtobool(buf, &page_owner_enabled);
 
 	if (page_owner_enabled)
-		stack_depot_want_early_init();
+		stack_depot_request_early_init();
 
 	return ret;
 }
diff --git a/mm/slub.c b/mm/slub.c
index 13459c69095a..f2c6c356bc36 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1592,7 +1592,7 @@ static int __init setup_slub_debug(char *str)
 		} else {
 			slab_list_specified = true;
 			if (flags & SLAB_STORE_USER)
-				stack_depot_want_early_init();
+				stack_depot_request_early_init();
 		}
 	}
 
@@ -1611,7 +1611,7 @@ static int __init setup_slub_debug(char *str)
 out:
 	slub_debug = global_flags;
 	if (slub_debug & SLAB_STORE_USER)
-		stack_depot_want_early_init();
+		stack_depot_request_early_init();
 	if (slub_debug != 0 || slub_debug_string)
 		static_branch_enable(&slub_debug_enabled);
 	else
-- 
2.25.1



* [PATCH 05/18] lib/stackdepot: rename stack_depot_disable
  2023-01-30 20:49 [PATCH 00/18] lib/stackdepot: fixes and clean-ups andrey.konovalov
                   ` (3 preceding siblings ...)
  2023-01-30 20:49 ` [PATCH 04/18] lib/stackdepot, mm: rename stack_depot_want_early_init andrey.konovalov
@ 2023-01-30 20:49 ` andrey.konovalov
  2023-01-31 10:28   ` Alexander Potapenko
  2023-01-30 20:49 ` [PATCH 06/18] lib/stackdepot: annotate init and early init functions andrey.konovalov
                   ` (12 subsequent siblings)
  17 siblings, 1 reply; 51+ messages in thread
From: andrey.konovalov @ 2023-01-30 20:49 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Vlastimil Babka, kasan-dev, Evgenii Stepanov,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Rename stack_depot_disable to stack_depot_disabled to make its name
consistent with the names of the other stack depot flags.

Also put stack_depot_disabled's definition together with the other flags.

Also rename is_stack_depot_disabled to disable_stack_depot: this name
looks more conventional for a function that processes a boot parameter.

No functional changes.
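
For context, this is the conventional shape of such a boot parameter
handler (a generic sketch with made-up names, not the stack depot code
itself):

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/printk.h>

static bool myfeature_disabled;

/* Runs during early parameter parsing, before mm_init(). */
static int __init disable_myfeature(char *str)
{
	if (!kstrtobool(str, &myfeature_disabled) && myfeature_disabled)
		pr_info("myfeature disabled\n");
	return 0;
}
early_param("myfeature_disable", disable_myfeature);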

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 lib/stackdepot.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 8743fad1485f..6e8aef12cf89 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -71,6 +71,7 @@ struct stack_record {
 	unsigned long entries[];	/* Variable-sized array of entries. */
 };
 
+static bool stack_depot_disabled;
 static bool __stack_depot_early_init_requested __initdata = IS_ENABLED(CONFIG_STACKDEPOT_ALWAYS_INIT);
 static bool __stack_depot_early_init_passed __initdata;
 
@@ -91,21 +92,20 @@ static DEFINE_RAW_SPINLOCK(depot_lock);
 static unsigned int stack_hash_order;
 static unsigned int stack_hash_mask;
 
-static bool stack_depot_disable;
 static struct stack_record **stack_table;
 
-static int __init is_stack_depot_disabled(char *str)
+static int __init disable_stack_depot(char *str)
 {
 	int ret;
 
-	ret = kstrtobool(str, &stack_depot_disable);
-	if (!ret && stack_depot_disable) {
+	ret = kstrtobool(str, &stack_depot_disabled);
+	if (!ret && stack_depot_disabled) {
 		pr_info("disabled\n");
 		stack_table = NULL;
 	}
 	return 0;
 }
-early_param("stack_depot_disable", is_stack_depot_disabled);
+early_param("stack_depot_disable", disable_stack_depot);
 
 void __init stack_depot_request_early_init(void)
 {
@@ -128,7 +128,7 @@ int __init stack_depot_early_init(void)
 	if (kasan_enabled() && !stack_hash_order)
 		stack_hash_order = STACK_HASH_ORDER_MAX;
 
-	if (!__stack_depot_early_init_requested || stack_depot_disable)
+	if (!__stack_depot_early_init_requested || stack_depot_disabled)
 		return 0;
 
 	if (stack_hash_order)
@@ -145,7 +145,7 @@ int __init stack_depot_early_init(void)
 
 	if (!stack_table) {
 		pr_err("hash table allocation failed, disabling\n");
-		stack_depot_disable = true;
+		stack_depot_disabled = true;
 		return -ENOMEM;
 	}
 
@@ -158,7 +158,7 @@ int stack_depot_init(void)
 	int ret = 0;
 
 	mutex_lock(&stack_depot_init_mutex);
-	if (!stack_depot_disable && !stack_table) {
+	if (!stack_depot_disabled && !stack_table) {
 		unsigned long entries;
 		int scale = STACK_HASH_SCALE;
 
@@ -184,7 +184,7 @@ int stack_depot_init(void)
 		stack_table = kvcalloc(entries, sizeof(struct stack_record *), GFP_KERNEL);
 		if (!stack_table) {
 			pr_err("hash table allocation failed, disabling\n");
-			stack_depot_disable = true;
+			stack_depot_disabled = true;
 			ret = -ENOMEM;
 		}
 		stack_hash_mask = entries - 1;
@@ -354,7 +354,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 	 */
 	nr_entries = filter_irq_stacks(entries, nr_entries);
 
-	if (unlikely(nr_entries == 0) || stack_depot_disable)
+	if (unlikely(nr_entries == 0) || stack_depot_disabled)
 		goto fast_exit;
 
 	hash = hash_stack(entries, nr_entries);
-- 
2.25.1



* [PATCH 06/18] lib/stackdepot: annotate init and early init functions
  2023-01-30 20:49 [PATCH 00/18] lib/stackdepot: fixes and clean-ups andrey.konovalov
                   ` (4 preceding siblings ...)
  2023-01-30 20:49 ` [PATCH 05/18] lib/stackdepot: rename stack_depot_disable andrey.konovalov
@ 2023-01-30 20:49 ` andrey.konovalov
  2023-01-31 10:30   ` Alexander Potapenko
  2023-01-30 20:49 ` [PATCH 07/18] lib/stackdepot: lower the indentation in stack_depot_init andrey.konovalov
                   ` (11 subsequent siblings)
  17 siblings, 1 reply; 51+ messages in thread
From: andrey.konovalov @ 2023-01-30 20:49 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Vlastimil Babka, kasan-dev, Evgenii Stepanov,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Add comments to stack_depot_early_init and stack_depot_init to explain
certain parts of their implementation.

Also add a pr_info message to stack_depot_early_init similar to the one
in stack_depot_init.

Also move the scale variable in stack_depot_init to the scope where it
is being used.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 lib/stackdepot.c | 27 +++++++++++++++++++++------
 1 file changed, 21 insertions(+), 6 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 6e8aef12cf89..b06f6a5caa83 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -115,24 +115,34 @@ void __init stack_depot_request_early_init(void)
 	__stack_depot_early_init_requested = true;
 }
 
+/* Allocates a hash table via memblock. Can only be used during early boot. */
 int __init stack_depot_early_init(void)
 {
 	unsigned long entries = 0;
 
-	/* This is supposed to be called only once, from mm_init() */
+	/* This function must be called only once, from mm_init(). */
 	if (WARN_ON(__stack_depot_early_init_passed))
 		return 0;
-
 	__stack_depot_early_init_passed = true;
 
+	/*
+	 * If KASAN is enabled, use the maximum order: KASAN is frequently used
+	 * in fuzzing scenarios, which leads to a large number of different
+	 * stack traces being stored in stack depot.
+	 */
 	if (kasan_enabled() && !stack_hash_order)
 		stack_hash_order = STACK_HASH_ORDER_MAX;
 
 	if (!__stack_depot_early_init_requested || stack_depot_disabled)
 		return 0;
 
+	/*
+	 * If stack_hash_order is not set, leave entries as 0 to rely on the
+	 * automatic calculations performed by alloc_large_system_hash.
+	 */
 	if (stack_hash_order)
-		entries = 1UL <<  stack_hash_order;
+		entries = 1UL << stack_hash_order;
+	pr_info("allocating hash table via alloc_large_system_hash\n");
 	stack_table = alloc_large_system_hash("stackdepot",
 						sizeof(struct stack_record *),
 						entries,
@@ -142,7 +152,6 @@ int __init stack_depot_early_init(void)
 						&stack_hash_mask,
 						1UL << STACK_HASH_ORDER_MIN,
 						1UL << STACK_HASH_ORDER_MAX);
-
 	if (!stack_table) {
 		pr_err("hash table allocation failed, disabling\n");
 		stack_depot_disabled = true;
@@ -152,6 +161,7 @@ int __init stack_depot_early_init(void)
 	return 0;
 }
 
+/* Allocates a hash table via kvmalloc. Can be used after boot. */
 int stack_depot_init(void)
 {
 	static DEFINE_MUTEX(stack_depot_init_mutex);
@@ -160,11 +170,16 @@ int stack_depot_init(void)
 	mutex_lock(&stack_depot_init_mutex);
 	if (!stack_depot_disabled && !stack_table) {
 		unsigned long entries;
-		int scale = STACK_HASH_SCALE;
 
+		/*
+		 * Similarly to stack_depot_early_init, use stack_hash_order
+		 * if assigned, and rely on automatic scaling otherwise.
+		 */
 		if (stack_hash_order) {
 			entries = 1UL << stack_hash_order;
 		} else {
+			int scale = STACK_HASH_SCALE;
+
 			entries = nr_free_buffer_pages();
 			entries = roundup_pow_of_two(entries);
 
@@ -179,7 +194,7 @@ int stack_depot_init(void)
 		if (entries > 1UL << STACK_HASH_ORDER_MAX)
 			entries = 1UL << STACK_HASH_ORDER_MAX;
 
-		pr_info("allocating hash table of %lu entries with kvcalloc\n",
+		pr_info("allocating hash table of %lu entries via kvcalloc\n",
 				entries);
 		stack_table = kvcalloc(entries, sizeof(struct stack_record *), GFP_KERNEL);
 		if (!stack_table) {
-- 
2.25.1



* [PATCH 07/18] lib/stackdepot: lower the indentation in stack_depot_init
  2023-01-30 20:49 [PATCH 00/18] lib/stackdepot: fixes and clean-ups andrey.konovalov
                   ` (5 preceding siblings ...)
  2023-01-30 20:49 ` [PATCH 06/18] lib/stackdepot: annotate init and early init functions andrey.konovalov
@ 2023-01-30 20:49 ` andrey.konovalov
  2023-01-31 10:37   ` Alexander Potapenko
  2023-01-30 20:49 ` [PATCH 08/18] lib/stackdepot: reorder and annotate global variables andrey.konovalov
                   ` (10 subsequent siblings)
  17 siblings, 1 reply; 51+ messages in thread
From: andrey.konovalov @ 2023-01-30 20:49 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Vlastimil Babka, kasan-dev, Evgenii Stepanov,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

stack_depot_init does most of its work inside an if check. Move that code
out and use a goto statement instead.

No functional changes.
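
The result is the common kernel lock-at-entry/unlock-at-exit shape; a
generic sketch of the pattern (made-up names, not the stack depot code
itself):

#include <linux/mutex.h>

static DEFINE_MUTEX(init_mutex);
static bool initialized;

static int init_once(void)
{
	int ret = 0;

	mutex_lock(&init_mutex);
	if (initialized)
		goto out_unlock; /* Nothing left to do, but still unlock. */

	/* ... the real work, all at a single indentation level ... */
	initialized = true;

out_unlock:
	mutex_unlock(&init_mutex);
	return ret;
}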

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 lib/stackdepot.c | 70 +++++++++++++++++++++++++-----------------------
 1 file changed, 37 insertions(+), 33 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index b06f6a5caa83..cb098bc99286 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -165,46 +165,50 @@ int __init stack_depot_early_init(void)
 int stack_depot_init(void)
 {
 	static DEFINE_MUTEX(stack_depot_init_mutex);
+	unsigned long entries;
 	int ret = 0;
 
 	mutex_lock(&stack_depot_init_mutex);
-	if (!stack_depot_disabled && !stack_table) {
-		unsigned long entries;
 
-		/*
-		 * Similarly to stack_depot_early_init, use stack_hash_order
-		 * if assigned, and rely on automatic scaling otherwise.
-		 */
-		if (stack_hash_order) {
-			entries = 1UL << stack_hash_order;
-		} else {
-			int scale = STACK_HASH_SCALE;
-
-			entries = nr_free_buffer_pages();
-			entries = roundup_pow_of_two(entries);
-
-			if (scale > PAGE_SHIFT)
-				entries >>= (scale - PAGE_SHIFT);
-			else
-				entries <<= (PAGE_SHIFT - scale);
-		}
+	if (stack_depot_disabled || stack_table)
+		goto out_unlock;
 
-		if (entries < 1UL << STACK_HASH_ORDER_MIN)
-			entries = 1UL << STACK_HASH_ORDER_MIN;
-		if (entries > 1UL << STACK_HASH_ORDER_MAX)
-			entries = 1UL << STACK_HASH_ORDER_MAX;
-
-		pr_info("allocating hash table of %lu entries via kvcalloc\n",
-				entries);
-		stack_table = kvcalloc(entries, sizeof(struct stack_record *), GFP_KERNEL);
-		if (!stack_table) {
-			pr_err("hash table allocation failed, disabling\n");
-			stack_depot_disabled = true;
-			ret = -ENOMEM;
-		}
-		stack_hash_mask = entries - 1;
+	/*
+	 * Similarly to stack_depot_early_init, use stack_hash_order
+	 * if assigned, and rely on automatic scaling otherwise.
+	 */
+	if (stack_hash_order) {
+		entries = 1UL << stack_hash_order;
+	} else {
+		int scale = STACK_HASH_SCALE;
+
+		entries = nr_free_buffer_pages();
+		entries = roundup_pow_of_two(entries);
+
+		if (scale > PAGE_SHIFT)
+			entries >>= (scale - PAGE_SHIFT);
+		else
+			entries <<= (PAGE_SHIFT - scale);
 	}
+
+	if (entries < 1UL << STACK_HASH_ORDER_MIN)
+		entries = 1UL << STACK_HASH_ORDER_MIN;
+	if (entries > 1UL << STACK_HASH_ORDER_MAX)
+		entries = 1UL << STACK_HASH_ORDER_MAX;
+
+	pr_info("allocating hash table of %lu entries via kvcalloc\n", entries);
+	stack_table = kvcalloc(entries, sizeof(struct stack_record *), GFP_KERNEL);
+	if (!stack_table) {
+		pr_err("hash table allocation failed, disabling\n");
+		stack_depot_disabled = true;
+		ret = -ENOMEM;
+		goto out_unlock;
+	}
+	stack_hash_mask = entries - 1;
+
+out_unlock:
 	mutex_unlock(&stack_depot_init_mutex);
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(stack_depot_init);
-- 
2.25.1



* [PATCH 08/18] lib/stackdepot: reorder and annotate global variables
  2023-01-30 20:49 [PATCH 00/18] lib/stackdepot: fixes and clean-ups andrey.konovalov
                   ` (6 preceding siblings ...)
  2023-01-30 20:49 ` [PATCH 07/18] lib/stackdepot: lower the indentation in stack_depot_init andrey.konovalov
@ 2023-01-30 20:49 ` andrey.konovalov
  2023-01-31 10:42   ` Alexander Potapenko
  2023-01-30 20:49 ` [PATCH 09/18] lib/stackdepot: rename hash table constants and variables andrey.konovalov
                   ` (9 subsequent siblings)
  17 siblings, 1 reply; 51+ messages in thread
From: andrey.konovalov @ 2023-01-30 20:49 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Vlastimil Babka, kasan-dev, Evgenii Stepanov,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Group stack depot global variables by their purpose:

1. Hash table-related variables,
2. Slab-related variables,

and add comments.

Also clean up comments for hash table-related constants.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 lib/stackdepot.c | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index cb098bc99286..89aee133303a 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -75,24 +75,31 @@ static bool stack_depot_disabled;
 static bool __stack_depot_early_init_requested __initdata = IS_ENABLED(CONFIG_STACKDEPOT_ALWAYS_INIT);
 static bool __stack_depot_early_init_passed __initdata;
 
-static void *stack_slabs[STACK_ALLOC_MAX_SLABS];
-
-static int depot_index;
-static int next_slab_inited;
-static size_t depot_offset;
-static DEFINE_RAW_SPINLOCK(depot_lock);
-
-/* one hash table bucket entry per 16kB of memory */
+/* Use one hash table bucket per 16 KB of memory. */
 #define STACK_HASH_SCALE	14
-/* limited between 4k and 1M buckets */
+/* Limit the number of buckets between 4K and 1M. */
 #define STACK_HASH_ORDER_MIN	12
 #define STACK_HASH_ORDER_MAX	20
+/* Initial seed for jhash2. */
 #define STACK_HASH_SEED 0x9747b28c
 
+/* Hash table of pointers to stored stack traces. */
+static struct stack_record **stack_table;
+/* Fixed order of the number of table buckets. Used when KASAN is enabled. */
 static unsigned int stack_hash_order;
+/* Hash mask for indexing the table. */
 static unsigned int stack_hash_mask;
 
-static struct stack_record **stack_table;
+/* Array of memory regions that store stack traces. */
+static void *stack_slabs[STACK_ALLOC_MAX_SLABS];
+/* Currently used slab in stack_slabs. */
+static int depot_index;
+/* Offset to the unused space in the currently used slab. */
+static size_t depot_offset;
+/* Lock that protects the variables above. */
+static DEFINE_RAW_SPINLOCK(depot_lock);
+/* Whether the next slab is initialized. */
+static int next_slab_inited;
 
 static int __init disable_stack_depot(char *str)
 {
-- 
2.25.1



* [PATCH 09/18] lib/stackdepot: rename hash table constants and variables
  2023-01-30 20:49 [PATCH 00/18] lib/stackdepot: fixes and clean-ups andrey.konovalov
                   ` (7 preceding siblings ...)
  2023-01-30 20:49 ` [PATCH 08/18] lib/stackdepot: reorder and annotate global variables andrey.konovalov
@ 2023-01-30 20:49 ` andrey.konovalov
  2023-01-31 11:33   ` Alexander Potapenko
  2023-01-30 20:49 ` [PATCH 10/18] lib/stackdepot: rename init_stack_slab andrey.konovalov
                   ` (8 subsequent siblings)
  17 siblings, 1 reply; 51+ messages in thread
From: andrey.konovalov @ 2023-01-30 20:49 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Vlastimil Babka, kasan-dev, Evgenii Stepanov,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Give more meaningful names to hash table-related constants and variables:

1. Rename STACK_HASH_SCALE to STACK_TABLE_SCALE to point out that it is
   related to scaling the hash table.

2. Rename STACK_HASH_ORDER_MIN/MAX to STACK_BUCKET_NUMBER_ORDER_MIN/MAX
   to point out that it is related to the number of hash table buckets.

3. Rename stack_hash_order to stack_bucket_number_order for the same
   reason as #2.

No functional changes.
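
As context for what STACK_TABLE_SCALE controls, a worked example (assuming
4 KB pages, i.e. PAGE_SHIFT = 12, and a machine whose free buffer pages
round up to 2^20 pages, i.e. 4 GB):

	entries = roundup_pow_of_two(nr_free_buffer_pages());	/* 2^20 */
	entries >>= (STACK_TABLE_SCALE - PAGE_SHIFT);		/* 2^18 */

which yields one hash bucket per 2^14 = 16 KB of memory. The bucket count
is then clamped between 2^STACK_BUCKET_NUMBER_ORDER_MIN (4K) and
2^STACK_BUCKET_NUMBER_ORDER_MAX (1M) buckets.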

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 lib/stackdepot.c | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 89aee133303a..cddcf029e307 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -76,17 +76,17 @@ static bool __stack_depot_early_init_requested __initdata = IS_ENABLED(CONFIG_ST
 static bool __stack_depot_early_init_passed __initdata;
 
 /* Use one hash table bucket per 16 KB of memory. */
-#define STACK_HASH_SCALE	14
+#define STACK_TABLE_SCALE 14
 /* Limit the number of buckets between 4K and 1M. */
-#define STACK_HASH_ORDER_MIN	12
-#define STACK_HASH_ORDER_MAX	20
+#define STACK_BUCKET_NUMBER_ORDER_MIN 12
+#define STACK_BUCKET_NUMBER_ORDER_MAX 20
 /* Initial seed for jhash2. */
 #define STACK_HASH_SEED 0x9747b28c
 
 /* Hash table of pointers to stored stack traces. */
 static struct stack_record **stack_table;
 /* Fixed order of the number of table buckets. Used when KASAN is enabled. */
-static unsigned int stack_hash_order;
+static unsigned int stack_bucket_number_order;
 /* Hash mask for indexing the table. */
 static unsigned int stack_hash_mask;
 
@@ -137,28 +137,28 @@ int __init stack_depot_early_init(void)
 	 * in fuzzing scenarios, which leads to a large number of different
 	 * stack traces being stored in stack depot.
 	 */
-	if (kasan_enabled() && !stack_hash_order)
-		stack_hash_order = STACK_HASH_ORDER_MAX;
+	if (kasan_enabled() && !stack_bucket_number_order)
+		stack_bucket_number_order = STACK_BUCKET_NUMBER_ORDER_MAX;
 
 	if (!__stack_depot_early_init_requested || stack_depot_disabled)
 		return 0;
 
 	/*
-	 * If stack_hash_order is not set, leave entries as 0 to rely on the
-	 * automatic calculations performed by alloc_large_system_hash.
+	 * If stack_bucket_number_order is not set, leave entries as 0 to rely
+	 * on the automatic calculations performed by alloc_large_system_hash.
 	 */
-	if (stack_hash_order)
-		entries = 1UL << stack_hash_order;
+	if (stack_bucket_number_order)
+		entries = 1UL << stack_bucket_number_order;
 	pr_info("allocating hash table via alloc_large_system_hash\n");
 	stack_table = alloc_large_system_hash("stackdepot",
 						sizeof(struct stack_record *),
 						entries,
-						STACK_HASH_SCALE,
+						STACK_TABLE_SCALE,
 						HASH_EARLY | HASH_ZERO,
 						NULL,
 						&stack_hash_mask,
-						1UL << STACK_HASH_ORDER_MIN,
-						1UL << STACK_HASH_ORDER_MAX);
+						1UL << STACK_BUCKET_NUMBER_ORDER_MIN,
+						1UL << STACK_BUCKET_NUMBER_ORDER_MAX);
 	if (!stack_table) {
 		pr_err("hash table allocation failed, disabling\n");
 		stack_depot_disabled = true;
@@ -181,13 +181,13 @@ int stack_depot_init(void)
 		goto out_unlock;
 
 	/*
-	 * Similarly to stack_depot_early_init, use stack_hash_order
+	 * Similarly to stack_depot_early_init, use stack_bucket_number_order
 	 * if assigned, and rely on automatic scaling otherwise.
 	 */
-	if (stack_hash_order) {
-		entries = 1UL << stack_hash_order;
+	if (stack_bucket_number_order) {
+		entries = 1UL << stack_bucket_number_order;
 	} else {
-		int scale = STACK_HASH_SCALE;
+		int scale = STACK_TABLE_SCALE;
 
 		entries = nr_free_buffer_pages();
 		entries = roundup_pow_of_two(entries);
@@ -198,10 +198,10 @@ int stack_depot_init(void)
 			entries <<= (PAGE_SHIFT - scale);
 	}
 
-	if (entries < 1UL << STACK_HASH_ORDER_MIN)
-		entries = 1UL << STACK_HASH_ORDER_MIN;
-	if (entries > 1UL << STACK_HASH_ORDER_MAX)
-		entries = 1UL << STACK_HASH_ORDER_MAX;
+	if (entries < 1UL << STACK_BUCKET_NUMBER_ORDER_MIN)
+		entries = 1UL << STACK_BUCKET_NUMBER_ORDER_MIN;
+	if (entries > 1UL << STACK_BUCKET_NUMBER_ORDER_MAX)
+		entries = 1UL << STACK_BUCKET_NUMBER_ORDER_MAX;
 
 	pr_info("allocating hash table of %lu entries via kvcalloc\n", entries);
 	stack_table = kvcalloc(entries, sizeof(struct stack_record *), GFP_KERNEL);
-- 
2.25.1



* [PATCH 10/18] lib/stackdepot: rename init_stack_slab
  2023-01-30 20:49 [PATCH 00/18] lib/stackdepot: fixes and clean-ups andrey.konovalov
                   ` (8 preceding siblings ...)
  2023-01-30 20:49 ` [PATCH 09/18] lib/stackdepot: rename hash table constants and variables andrey.konovalov
@ 2023-01-30 20:49 ` andrey.konovalov
  2023-01-31 11:34   ` Alexander Potapenko
  2023-01-30 20:49 ` [PATCH 11/18] lib/stackdepot: rename slab variables andrey.konovalov
                   ` (7 subsequent siblings)
  17 siblings, 1 reply; 51+ messages in thread
From: andrey.konovalov @ 2023-01-30 20:49 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Vlastimil Babka, kasan-dev, Evgenii Stepanov,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Rename init_stack_slab to depot_init_slab to align the name with
depot_alloc_stack.

No functional changes.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 lib/stackdepot.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index cddcf029e307..69b9316b0d4b 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -220,7 +220,7 @@ int stack_depot_init(void)
 }
 EXPORT_SYMBOL_GPL(stack_depot_init);
 
-static bool init_stack_slab(void **prealloc)
+static bool depot_init_slab(void **prealloc)
 {
 	if (!*prealloc)
 		return false;
@@ -268,12 +268,12 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 		/*
 		 * smp_store_release() here pairs with smp_load_acquire() from
 		 * |next_slab_inited| in stack_depot_save() and
-		 * init_stack_slab().
+		 * depot_init_slab().
 		 */
 		if (depot_index + 1 < STACK_ALLOC_MAX_SLABS)
 			smp_store_release(&next_slab_inited, 0);
 	}
-	init_stack_slab(prealloc);
+	depot_init_slab(prealloc);
 	if (stack_slabs[depot_index] == NULL)
 		return NULL;
 
@@ -402,7 +402,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 	 * lock.
 	 *
 	 * The smp_load_acquire() here pairs with smp_store_release() to
-	 * |next_slab_inited| in depot_alloc_stack() and init_stack_slab().
+	 * |next_slab_inited| in depot_alloc_stack() and depot_init_slab().
 	 */
 	if (unlikely(can_alloc && !smp_load_acquire(&next_slab_inited))) {
 		/*
@@ -438,7 +438,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 		 * We didn't need to store this stack trace, but let's keep
 		 * the preallocated memory for the future.
 		 */
-		WARN_ON(!init_stack_slab(&prealloc));
+		WARN_ON(!depot_init_slab(&prealloc));
 	}
 
 	raw_spin_unlock_irqrestore(&depot_lock, flags);
-- 
2.25.1



* [PATCH 11/18] lib/stackdepot: rename slab variables
  2023-01-30 20:49 [PATCH 00/18] lib/stackdepot: fixes and clean-ups andrey.konovalov
                   ` (9 preceding siblings ...)
  2023-01-30 20:49 ` [PATCH 10/18] lib/stackdepot: rename init_stack_slab andrey.konovalov
@ 2023-01-30 20:49 ` andrey.konovalov
  2023-01-31 11:59   ` Alexander Potapenko
  2023-01-30 20:49 ` [PATCH 12/18] lib/stackdepot: rename handle and slab constants andrey.konovalov
                   ` (6 subsequent siblings)
  17 siblings, 1 reply; 51+ messages in thread
From: andrey.konovalov @ 2023-01-30 20:49 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Vlastimil Babka, kasan-dev, Evgenii Stepanov,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Give better names to the slab-related global variables: change the
"depot_" prefix to "slab_" to point out that these variables are related
to stack depot slabs.

Also rename the slabindex field in handle_parts to align its name with
the slab_index global variable.

No functional changes.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 lib/stackdepot.c | 46 +++++++++++++++++++++++-----------------------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 69b9316b0d4b..023f299bedf6 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -56,7 +56,7 @@
 union handle_parts {
 	depot_stack_handle_t handle;
 	struct {
-		u32 slabindex : STACK_ALLOC_INDEX_BITS;
+		u32 slab_index : STACK_ALLOC_INDEX_BITS;
 		u32 offset : STACK_ALLOC_OFFSET_BITS;
 		u32 valid : STACK_ALLOC_NULL_PROTECTION_BITS;
 		u32 extra : STACK_DEPOT_EXTRA_BITS;
@@ -93,11 +93,11 @@ static unsigned int stack_hash_mask;
 /* Array of memory regions that store stack traces. */
 static void *stack_slabs[STACK_ALLOC_MAX_SLABS];
 /* Currently used slab in stack_slabs. */
-static int depot_index;
+static int slab_index;
 /* Offset to the unused space in the currently used slab. */
-static size_t depot_offset;
+static size_t slab_offset;
 /* Lock that protects the variables above. */
-static DEFINE_RAW_SPINLOCK(depot_lock);
+static DEFINE_RAW_SPINLOCK(slab_lock);
 /* Whether the next slab is initialized. */
 static int next_slab_inited;
 
@@ -230,13 +230,13 @@ static bool depot_init_slab(void **prealloc)
 	 */
 	if (smp_load_acquire(&next_slab_inited))
 		return true;
-	if (stack_slabs[depot_index] == NULL) {
-		stack_slabs[depot_index] = *prealloc;
+	if (stack_slabs[slab_index] == NULL) {
+		stack_slabs[slab_index] = *prealloc;
 		*prealloc = NULL;
 	} else {
 		/* If this is the last depot slab, do not touch the next one. */
-		if (depot_index + 1 < STACK_ALLOC_MAX_SLABS) {
-			stack_slabs[depot_index + 1] = *prealloc;
+		if (slab_index + 1 < STACK_ALLOC_MAX_SLABS) {
+			stack_slabs[slab_index + 1] = *prealloc;
 			*prealloc = NULL;
 			/*
 			 * This smp_store_release pairs with smp_load_acquire()
@@ -258,35 +258,35 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 
 	required_size = ALIGN(required_size, 1 << STACK_ALLOC_ALIGN);
 
-	if (unlikely(depot_offset + required_size > STACK_ALLOC_SIZE)) {
-		if (unlikely(depot_index + 1 >= STACK_ALLOC_MAX_SLABS)) {
+	if (unlikely(slab_offset + required_size > STACK_ALLOC_SIZE)) {
+		if (unlikely(slab_index + 1 >= STACK_ALLOC_MAX_SLABS)) {
 			WARN_ONCE(1, "Stack depot reached limit capacity");
 			return NULL;
 		}
-		depot_index++;
-		depot_offset = 0;
+		slab_index++;
+		slab_offset = 0;
 		/*
 		 * smp_store_release() here pairs with smp_load_acquire() from
 		 * |next_slab_inited| in stack_depot_save() and
 		 * depot_init_slab().
 		 */
-		if (depot_index + 1 < STACK_ALLOC_MAX_SLABS)
+		if (slab_index + 1 < STACK_ALLOC_MAX_SLABS)
 			smp_store_release(&next_slab_inited, 0);
 	}
 	depot_init_slab(prealloc);
-	if (stack_slabs[depot_index] == NULL)
+	if (stack_slabs[slab_index] == NULL)
 		return NULL;
 
-	stack = stack_slabs[depot_index] + depot_offset;
+	stack = stack_slabs[slab_index] + slab_offset;
 
 	stack->hash = hash;
 	stack->size = size;
-	stack->handle.slabindex = depot_index;
-	stack->handle.offset = depot_offset >> STACK_ALLOC_ALIGN;
+	stack->handle.slab_index = slab_index;
+	stack->handle.offset = slab_offset >> STACK_ALLOC_ALIGN;
 	stack->handle.valid = 1;
 	stack->handle.extra = 0;
 	memcpy(stack->entries, entries, flex_array_size(stack, entries, size));
-	depot_offset += required_size;
+	slab_offset += required_size;
 
 	return stack;
 }
@@ -418,7 +418,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 			prealloc = page_address(page);
 	}
 
-	raw_spin_lock_irqsave(&depot_lock, flags);
+	raw_spin_lock_irqsave(&slab_lock, flags);
 
 	found = find_stack(*bucket, entries, nr_entries, hash);
 	if (!found) {
@@ -441,7 +441,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 		WARN_ON(!depot_init_slab(&prealloc));
 	}
 
-	raw_spin_unlock_irqrestore(&depot_lock, flags);
+	raw_spin_unlock_irqrestore(&slab_lock, flags);
 exit:
 	if (prealloc) {
 		/* Nobody used this memory, ok to free it. */
@@ -497,12 +497,12 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 	if (!handle)
 		return 0;
 
-	if (parts.slabindex > depot_index) {
+	if (parts.slab_index > slab_index) {
 		WARN(1, "slab index %d out of bounds (%d) for stack id %08x\n",
-			parts.slabindex, depot_index, handle);
+			parts.slab_index, slab_index, handle);
 		return 0;
 	}
-	slab = stack_slabs[parts.slabindex];
+	slab = stack_slabs[parts.slab_index];
 	if (!slab)
 		return 0;
 	stack = slab + offset;
-- 
2.25.1



* [PATCH 12/18] lib/stackdepot: rename handle and slab constants
  2023-01-30 20:49 [PATCH 00/18] lib/stackdepot: fixes and clean-ups andrey.konovalov
                   ` (10 preceding siblings ...)
  2023-01-30 20:49 ` [PATCH 11/18] lib/stackdepot: rename slab variables andrey.konovalov
@ 2023-01-30 20:49 ` andrey.konovalov
  2023-01-31 12:11   ` Alexander Potapenko
  2023-01-30 20:49 ` [PATCH 13/18] lib/stacktrace: drop impossible WARN_ON for depot_init_slab andrey.konovalov
                   ` (5 subsequent siblings)
  17 siblings, 1 reply; 51+ messages in thread
From: andrey.konovalov @ 2023-01-30 20:49 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Vlastimil Babka, kasan-dev, Evgenii Stepanov,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Change the "STACK_ALLOC_" prefix to "DEPOT_" for the constants that
define the number of bits in stack depot handles and the maximum number
of slabs.

The old prefix is unclear and makes one wonder how these constants are
related to stack allocations. The new prefix is also shorter.

Also simplify the comment for DEPOT_SLAB_ORDER.

No functional changes.
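
For reference, how the renamed constants carve up the 32 handle bits
(assuming 4 KB pages, i.e. PAGE_SHIFT = 12, and the values in this patch):

	DEPOT_OFFSET_BITS     = DEPOT_SLAB_ORDER + PAGE_SHIFT - DEPOT_STACK_ALIGN
	                      = 2 + 12 - 4 = 10
	DEPOT_SLAB_INDEX_BITS = DEPOT_HANDLE_BITS - DEPOT_VALID_BITS -
	                        DEPOT_OFFSET_BITS - STACK_DEPOT_EXTRA_BITS
	                      = 32 - 1 - 10 - 5 = 16
	DEPOT_MAX_SLABS       = min(2^16, DEPOT_SLABS_CAP) = 8192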

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 lib/stackdepot.c | 56 +++++++++++++++++++++++-------------------------
 1 file changed, 27 insertions(+), 29 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 023f299bedf6..b946ba74fea0 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -36,30 +36,28 @@
 #include <linux/memblock.h>
 #include <linux/kasan-enabled.h>
 
-#define DEPOT_STACK_BITS (sizeof(depot_stack_handle_t) * 8)
-
-#define STACK_ALLOC_NULL_PROTECTION_BITS 1
-#define STACK_ALLOC_ORDER 2 /* 'Slab' size order for stack depot, 4 pages */
-#define STACK_ALLOC_SIZE (1LL << (PAGE_SHIFT + STACK_ALLOC_ORDER))
-#define STACK_ALLOC_ALIGN 4
-#define STACK_ALLOC_OFFSET_BITS (STACK_ALLOC_ORDER + PAGE_SHIFT - \
-					STACK_ALLOC_ALIGN)
-#define STACK_ALLOC_INDEX_BITS (DEPOT_STACK_BITS - \
-		STACK_ALLOC_NULL_PROTECTION_BITS - \
-		STACK_ALLOC_OFFSET_BITS - STACK_DEPOT_EXTRA_BITS)
-#define STACK_ALLOC_SLABS_CAP 8192
-#define STACK_ALLOC_MAX_SLABS \
-	(((1LL << (STACK_ALLOC_INDEX_BITS)) < STACK_ALLOC_SLABS_CAP) ? \
-	 (1LL << (STACK_ALLOC_INDEX_BITS)) : STACK_ALLOC_SLABS_CAP)
+#define DEPOT_HANDLE_BITS (sizeof(depot_stack_handle_t) * 8)
+
+#define DEPOT_VALID_BITS 1
+#define DEPOT_SLAB_ORDER 2 /* Slab size order, 4 pages */
+#define DEPOT_SLAB_SIZE (1LL << (PAGE_SHIFT + DEPOT_SLAB_ORDER))
+#define DEPOT_STACK_ALIGN 4
+#define DEPOT_OFFSET_BITS (DEPOT_SLAB_ORDER + PAGE_SHIFT - DEPOT_STACK_ALIGN)
+#define DEPOT_SLAB_INDEX_BITS (DEPOT_HANDLE_BITS - DEPOT_VALID_BITS - \
+			       DEPOT_OFFSET_BITS - STACK_DEPOT_EXTRA_BITS)
+#define DEPOT_SLABS_CAP 8192
+#define DEPOT_MAX_SLABS \
+	(((1LL << (DEPOT_SLAB_INDEX_BITS)) < DEPOT_SLABS_CAP) ? \
+	 (1LL << (DEPOT_SLAB_INDEX_BITS)) : DEPOT_SLABS_CAP)
 
 /* The compact structure to store the reference to stacks. */
 union handle_parts {
 	depot_stack_handle_t handle;
 	struct {
-		u32 slab_index : STACK_ALLOC_INDEX_BITS;
-		u32 offset : STACK_ALLOC_OFFSET_BITS;
-		u32 valid : STACK_ALLOC_NULL_PROTECTION_BITS;
-		u32 extra : STACK_DEPOT_EXTRA_BITS;
+		u32 slab_index	: DEPOT_SLAB_INDEX_BITS;
+		u32 offset	: DEPOT_OFFSET_BITS;
+		u32 valid	: DEPOT_VALID_BITS;
+		u32 extra	: STACK_DEPOT_EXTRA_BITS;
 	};
 };
 
@@ -91,7 +89,7 @@ static unsigned int stack_bucket_number_order;
 static unsigned int stack_hash_mask;
 
 /* Array of memory regions that store stack traces. */
-static void *stack_slabs[STACK_ALLOC_MAX_SLABS];
+static void *stack_slabs[DEPOT_MAX_SLABS];
 /* Currently used slab in stack_slabs. */
 static int slab_index;
 /* Offset to the unused space in the currently used slab. */
@@ -235,7 +233,7 @@ static bool depot_init_slab(void **prealloc)
 		*prealloc = NULL;
 	} else {
 		/* If this is the last depot slab, do not touch the next one. */
-		if (slab_index + 1 < STACK_ALLOC_MAX_SLABS) {
+		if (slab_index + 1 < DEPOT_MAX_SLABS) {
 			stack_slabs[slab_index + 1] = *prealloc;
 			*prealloc = NULL;
 			/*
@@ -256,10 +254,10 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 	struct stack_record *stack;
 	size_t required_size = struct_size(stack, entries, size);
 
-	required_size = ALIGN(required_size, 1 << STACK_ALLOC_ALIGN);
+	required_size = ALIGN(required_size, 1 << DEPOT_STACK_ALIGN);
 
-	if (unlikely(slab_offset + required_size > STACK_ALLOC_SIZE)) {
-		if (unlikely(slab_index + 1 >= STACK_ALLOC_MAX_SLABS)) {
+	if (unlikely(slab_offset + required_size > DEPOT_SLAB_SIZE)) {
+		if (unlikely(slab_index + 1 >= DEPOT_MAX_SLABS)) {
 			WARN_ONCE(1, "Stack depot reached limit capacity");
 			return NULL;
 		}
@@ -270,7 +268,7 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 		 * |next_slab_inited| in stack_depot_save() and
 		 * depot_init_slab().
 		 */
-		if (slab_index + 1 < STACK_ALLOC_MAX_SLABS)
+		if (slab_index + 1 < DEPOT_MAX_SLABS)
 			smp_store_release(&next_slab_inited, 0);
 	}
 	depot_init_slab(prealloc);
@@ -282,7 +280,7 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 	stack->hash = hash;
 	stack->size = size;
 	stack->handle.slab_index = slab_index;
-	stack->handle.offset = slab_offset >> STACK_ALLOC_ALIGN;
+	stack->handle.offset = slab_offset >> DEPOT_STACK_ALIGN;
 	stack->handle.valid = 1;
 	stack->handle.extra = 0;
 	memcpy(stack->entries, entries, flex_array_size(stack, entries, size));
@@ -413,7 +411,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 		alloc_flags &= ~GFP_ZONEMASK;
 		alloc_flags &= (GFP_ATOMIC | GFP_KERNEL);
 		alloc_flags |= __GFP_NOWARN;
-		page = alloc_pages(alloc_flags, STACK_ALLOC_ORDER);
+		page = alloc_pages(alloc_flags, DEPOT_SLAB_ORDER);
 		if (page)
 			prealloc = page_address(page);
 	}
@@ -445,7 +443,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 exit:
 	if (prealloc) {
 		/* Nobody used this memory, ok to free it. */
-		free_pages((unsigned long)prealloc, STACK_ALLOC_ORDER);
+		free_pages((unsigned long)prealloc, DEPOT_SLAB_ORDER);
 	}
 	if (found)
 		retval.handle = found->handle.handle;
@@ -490,7 +488,7 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 {
 	union handle_parts parts = { .handle = handle };
 	void *slab;
-	size_t offset = parts.offset << STACK_ALLOC_ALIGN;
+	size_t offset = parts.offset << DEPOT_STACK_ALIGN;
 	struct stack_record *stack;
 
 	*entries = NULL;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH 13/18] lib/stacktrace: drop impossible WARN_ON for depot_init_slab
  2023-01-30 20:49 [PATCH 00/18] lib/stackdepot: fixes and clean-ups andrey.konovalov
                   ` (11 preceding siblings ...)
  2023-01-30 20:49 ` [PATCH 12/18] lib/stackdepot: rename handle and slab constants andrey.konovalov
@ 2023-01-30 20:49 ` andrey.konovalov
  2023-01-30 20:49 ` [PATCH 14/18] lib/stackdepot: annotate depot_init_slab and depot_alloc_stack andrey.konovalov
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 51+ messages in thread
From: andrey.konovalov @ 2023-01-30 20:49 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Vlastimil Babka, kasan-dev, Evgenii Stepanov,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

depot_init_slab has two call sites:

1. In depot_alloc_stack with a potentially NULL prealloc.
2. In __stack_depot_save with a non-NULL prealloc.

At the same time, depot_init_slab can only return false when prealloc
is NULL.

As the second call site makes sure that prealloc is not NULL, the WARN_ON
there can never trigger. Thus, drop the WARN_ON and also move the prealloc
check from depot_init_slab to its first call site.

Also change the return type of depot_init_slab to void, as the
function can now only return true.
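
The resulting call sites, condensed from the diff below for
illustration:

	/* 1. In depot_alloc_stack, where prealloc may be NULL: */
	if (*prealloc)
		depot_init_slab(prealloc);

	/* 2. In __stack_depot_save, where prealloc is non-NULL: */
	depot_init_slab(&prealloc);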

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 lib/stackdepot.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index b946ba74fea0..d6be82a5c223 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -218,16 +218,14 @@ int stack_depot_init(void)
 }
 EXPORT_SYMBOL_GPL(stack_depot_init);
 
-static bool depot_init_slab(void **prealloc)
+static void depot_init_slab(void **prealloc)
 {
-	if (!*prealloc)
-		return false;
 	/*
 	 * This smp_load_acquire() pairs with smp_store_release() to
 	 * |next_slab_inited| below and in depot_alloc_stack().
 	 */
 	if (smp_load_acquire(&next_slab_inited))
-		return true;
+		return;
 	if (stack_slabs[slab_index] == NULL) {
 		stack_slabs[slab_index] = *prealloc;
 		*prealloc = NULL;
@@ -244,7 +242,6 @@ static bool depot_init_slab(void **prealloc)
 			smp_store_release(&next_slab_inited, 1);
 		}
 	}
-	return true;
 }
 
 /* Allocation of a new stack in raw storage */
@@ -271,7 +268,8 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 		if (slab_index + 1 < DEPOT_MAX_SLABS)
 			smp_store_release(&next_slab_inited, 0);
 	}
-	depot_init_slab(prealloc);
+	if (*prealloc)
+		depot_init_slab(prealloc);
 	if (stack_slabs[slab_index] == NULL)
 		return NULL;
 
@@ -436,7 +434,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 		 * We didn't need to store this stack trace, but let's keep
 		 * the preallocated memory for the future.
 		 */
-		WARN_ON(!depot_init_slab(&prealloc));
+		depot_init_slab(&prealloc);
 	}
 
 	raw_spin_unlock_irqrestore(&slab_lock, flags);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH 14/18] lib/stackdepot: annotate depot_init_slab and depot_alloc_stack
  2023-01-30 20:49 [PATCH 00/18] lib/stackdepot: fixes and clean-ups andrey.konovalov
                   ` (12 preceding siblings ...)
  2023-01-30 20:49 ` [PATCH 13/18] lib/stacktrace: drop impossible WARN_ON for depot_init_slab andrey.konovalov
@ 2023-01-30 20:49 ` andrey.konovalov
  2023-01-30 20:49 ` [PATCH 15/18] lib/stacktrace, kasan, kmsan: rework extra_bits interface andrey.konovalov
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 51+ messages in thread
From: andrey.konovalov @ 2023-01-30 20:49 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Vlastimil Babka, kasan-dev, Evgenii Stepanov,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Clean up the existing comments and add new ones to depot_init_slab and
depot_alloc_stack.

As part of the clean-up, remove mentions of which variable is accessed
by smp_store_release() and smp_load_acquire(): this is already clear
from the code.
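
The pairing being documented, condensed from the hunks below:

	/* depot_init_slab(): publish the preallocated next slab. */
	stack_slabs[slab_index + 1] = *prealloc;
	smp_store_release(&next_slab_inited, 1);

	/* depot_init_slab() and stack_depot_save(): consume the flag. */
	if (smp_load_acquire(&next_slab_inited))
		return;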

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 lib/stackdepot.c | 35 +++++++++++++++++++++++++----------
 1 file changed, 25 insertions(+), 10 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index d6be82a5c223..7282565722f2 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -218,33 +218,41 @@ int stack_depot_init(void)
 }
 EXPORT_SYMBOL_GPL(stack_depot_init);
 
+/* Uses preallocated memory to initialize a new stack depot slab. */
 static void depot_init_slab(void **prealloc)
 {
 	/*
-	 * This smp_load_acquire() pairs with smp_store_release() to
-	 * |next_slab_inited| below and in depot_alloc_stack().
+	 * If the next slab is already initialized, do not use the
+	 * preallocated memory.
+	 * smp_load_acquire() here pairs with smp_store_release() below and
+	 * in depot_alloc_stack().
 	 */
 	if (smp_load_acquire(&next_slab_inited))
 		return;
+
+	/* Check if the current slab is not yet allocated. */
 	if (stack_slabs[slab_index] == NULL) {
+		/* Use the preallocated memory for the current slab. */
 		stack_slabs[slab_index] = *prealloc;
 		*prealloc = NULL;
 	} else {
-		/* If this is the last depot slab, do not touch the next one. */
+		/*
+		 * Otherwise, use the preallocated memory for the next slab
+		 * as long as we do not exceed the maximum number of slabs.
+		 */
 		if (slab_index + 1 < DEPOT_MAX_SLABS) {
 			stack_slabs[slab_index + 1] = *prealloc;
 			*prealloc = NULL;
 			/*
 			 * This smp_store_release pairs with smp_load_acquire()
-			 * from |next_slab_inited| above and in
-			 * stack_depot_save().
+			 * above and in stack_depot_save().
 			 */
 			smp_store_release(&next_slab_inited, 1);
 		}
 	}
 }
 
-/* Allocation of a new stack in raw storage */
+/* Allocates a new stack in a stack depot slab. */
 static struct stack_record *
 depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 {
@@ -253,28 +261,35 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 
 	required_size = ALIGN(required_size, 1 << DEPOT_STACK_ALIGN);
 
+	/* Check if there is not enough space in the current slab. */
 	if (unlikely(slab_offset + required_size > DEPOT_SLAB_SIZE)) {
+		/* Bail out if we reached the slab limit. */
 		if (unlikely(slab_index + 1 >= DEPOT_MAX_SLABS)) {
 			WARN_ONCE(1, "Stack depot reached limit capacity");
 			return NULL;
 		}
+
+		/* Move on to the next slab. */
 		slab_index++;
 		slab_offset = 0;
 		/*
-		 * smp_store_release() here pairs with smp_load_acquire() from
-		 * |next_slab_inited| in stack_depot_save() and
-		 * depot_init_slab().
+		 * smp_store_release() here pairs with smp_load_acquire() in
+		 * stack_depot_save() and depot_init_slab().
 		 */
 		if (slab_index + 1 < DEPOT_MAX_SLABS)
 			smp_store_release(&next_slab_inited, 0);
 	}
+
+	/* Assign the preallocated memory to a slab if required. */
 	if (*prealloc)
 		depot_init_slab(prealloc);
+
+	/* Check if we have a slab to save the stack trace. */
 	if (stack_slabs[slab_index] == NULL)
 		return NULL;
 
+	/* Save the stack trace. */
 	stack = stack_slabs[slab_index] + slab_offset;
-
 	stack->hash = hash;
 	stack->size = size;
 	stack->handle.slab_index = slab_index;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH 15/18] lib/stacktrace, kasan, kmsan: rework extra_bits interface
  2023-01-30 20:49 [PATCH 00/18] lib/stackdepot: fixes and clean-ups andrey.konovalov
                   ` (13 preceding siblings ...)
  2023-01-30 20:49 ` [PATCH 14/18] lib/stackdepot: annotate depot_init_slab and depot_alloc_stack andrey.konovalov
@ 2023-01-30 20:49 ` andrey.konovalov
  2023-01-31  8:53   ` Marco Elver
  2023-02-02 10:03   ` Alexander Potapenko
  2023-01-30 20:49 ` [PATCH 16/18] lib/stackdepot: annotate racy slab_index accesses andrey.konovalov
                   ` (2 subsequent siblings)
  17 siblings, 2 replies; 51+ messages in thread
From: andrey.konovalov @ 2023-01-30 20:49 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Vlastimil Babka, kasan-dev, Evgenii Stepanov,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

The current implementation of the extra_bits interface is confusing:
passing extra_bits to __stack_depot_save makes it seem that the extra
bits are somehow stored in stack depot. In reality, they are only
embedded into a stack depot handle and are not used within stack depot.

Drop the extra_bits argument from __stack_depot_save and instead provide
a new stack_depot_set_extra_bits function (similar to the existing
stack_depot_get_extra_bits) that saves extra bits into a stack depot
handle.

Update the callers of __stack_depot_save to use the new interface.

This change also fixes a minor issue in the old code: __stack_depot_save
does not return a zero handle when saving the stack trace fails but
non-zero extra_bits are used.
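
For callers, the change looks like this; the sketch is condensed from
the kmsan hunk below:

	/* Before: extra bits passed into the save call. */
	handle = __stack_depot_save(entries, nr_entries, extra, flags, true);

	/* After: extra bits applied to the returned handle. */
	handle = __stack_depot_save(entries, nr_entries, flags, true);
	handle = stack_depot_set_extra_bits(handle, extra);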

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 include/linux/stackdepot.h |  4 +++-
 lib/stackdepot.c           | 38 +++++++++++++++++++++++++++++---------
 mm/kasan/common.c          |  2 +-
 mm/kmsan/core.c            | 10 +++++++---
 4 files changed, 40 insertions(+), 14 deletions(-)

diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
index c4e3abc16b16..f999811c66d7 100644
--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -57,7 +57,6 @@ static inline int stack_depot_early_init(void)	{ return 0; }
 
 depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 					unsigned int nr_entries,
-					unsigned int extra_bits,
 					gfp_t gfp_flags, bool can_alloc);
 
 depot_stack_handle_t stack_depot_save(unsigned long *entries,
@@ -71,6 +70,9 @@ void stack_depot_print(depot_stack_handle_t stack);
 int stack_depot_snprint(depot_stack_handle_t handle, char *buf, size_t size,
 		       int spaces);
 
+depot_stack_handle_t stack_depot_set_extra_bits(depot_stack_handle_t handle,
+						unsigned int extra_bits);
+
 unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle);
 
 #endif
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 7282565722f2..f291ad6a4e72 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -346,7 +346,6 @@ static inline struct stack_record *find_stack(struct stack_record *bucket,
  *
  * @entries:		Pointer to storage array
  * @nr_entries:		Size of the storage array
- * @extra_bits:		Flags to store in unused bits of depot_stack_handle_t
  * @alloc_flags:	Allocation gfp flags
  * @can_alloc:		Allocate stack slabs (increased chance of failure if false)
  *
@@ -358,10 +357,6 @@ static inline struct stack_record *find_stack(struct stack_record *bucket,
  * If the stack trace in @entries is from an interrupt, only the portion up to
  * interrupt entry is saved.
  *
- * Additional opaque flags can be passed in @extra_bits, stored in the unused
- * bits of the stack handle, and retrieved using stack_depot_get_extra_bits()
- * without calling stack_depot_fetch().
- *
  * Context: Any context, but setting @can_alloc to %false is required if
  *          alloc_pages() cannot be used from the current context. Currently
  *          this is the case from contexts where neither %GFP_ATOMIC nor
@@ -371,7 +366,6 @@ static inline struct stack_record *find_stack(struct stack_record *bucket,
  */
 depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 					unsigned int nr_entries,
-					unsigned int extra_bits,
 					gfp_t alloc_flags, bool can_alloc)
 {
 	struct stack_record *found = NULL, **bucket;
@@ -461,8 +455,6 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 	if (found)
 		retval.handle = found->handle.handle;
 fast_exit:
-	retval.extra = extra_bits;
-
 	return retval.handle;
 }
 EXPORT_SYMBOL_GPL(__stack_depot_save);
@@ -483,7 +475,7 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
 				      unsigned int nr_entries,
 				      gfp_t alloc_flags)
 {
-	return __stack_depot_save(entries, nr_entries, 0, alloc_flags, true);
+	return __stack_depot_save(entries, nr_entries, alloc_flags, true);
 }
 EXPORT_SYMBOL_GPL(stack_depot_save);
 
@@ -566,6 +558,34 @@ int stack_depot_snprint(depot_stack_handle_t handle, char *buf, size_t size,
 }
 EXPORT_SYMBOL_GPL(stack_depot_snprint);
 
+/**
+ * stack_depot_set_extra_bits - Set extra bits in a stack depot handle
+ *
+ * @handle:	Stack depot handle
+ * @extra_bits:	Value to set the extra bits
+ *
+ * Return: Stack depot handle with extra bits set
+ *
+ * Stack depot handles have a few unused bits, which can be used for storing
+ * user-specific information. These bits are transparent to the stack depot.
+ */
+depot_stack_handle_t stack_depot_set_extra_bits(depot_stack_handle_t handle,
+						unsigned int extra_bits)
+{
+	union handle_parts parts = { .handle = handle };
+
+	parts.extra = extra_bits;
+	return parts.handle;
+}
+EXPORT_SYMBOL(stack_depot_set_extra_bits);
+
+/**
+ * stack_depot_get_extra_bits - Retrieve extra bits from a stack depot handle
+ *
+ * @handle:	Stack depot handle with extra bits saved
+ *
+ * Return: Extra bits retrieved from the stack depot handle
+ */
 unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle)
 {
 	union handle_parts parts = { .handle = handle };
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 833bf2cfd2a3..50f4338b477f 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -43,7 +43,7 @@ depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc)
 	unsigned int nr_entries;
 
 	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
-	return __stack_depot_save(entries, nr_entries, 0, flags, can_alloc);
+	return __stack_depot_save(entries, nr_entries, flags, can_alloc);
 }
 
 void kasan_set_track(struct kasan_track *track, gfp_t flags)
diff --git a/mm/kmsan/core.c b/mm/kmsan/core.c
index 112dce135c7f..f710257d6867 100644
--- a/mm/kmsan/core.c
+++ b/mm/kmsan/core.c
@@ -69,13 +69,15 @@ depot_stack_handle_t kmsan_save_stack_with_flags(gfp_t flags,
 {
 	unsigned long entries[KMSAN_STACK_DEPTH];
 	unsigned int nr_entries;
+	depot_stack_handle_t handle;
 
 	nr_entries = stack_trace_save(entries, KMSAN_STACK_DEPTH, 0);
 
 	/* Don't sleep (see might_sleep_if() in __alloc_pages_nodemask()). */
 	flags &= ~__GFP_DIRECT_RECLAIM;
 
-	return __stack_depot_save(entries, nr_entries, extra, flags, true);
+	handle = __stack_depot_save(entries, nr_entries, flags, true);
+	return stack_depot_set_extra_bits(handle, extra);
 }
 
 /* Copy the metadata following the memmove() behavior. */
@@ -215,6 +217,7 @@ depot_stack_handle_t kmsan_internal_chain_origin(depot_stack_handle_t id)
 	u32 extra_bits;
 	int depth;
 	bool uaf;
+	depot_stack_handle_t handle;
 
 	if (!id)
 		return id;
@@ -250,8 +253,9 @@ depot_stack_handle_t kmsan_internal_chain_origin(depot_stack_handle_t id)
 	 * positives when __stack_depot_save() passes it to instrumented code.
 	 */
 	kmsan_internal_unpoison_memory(entries, sizeof(entries), false);
-	return __stack_depot_save(entries, ARRAY_SIZE(entries), extra_bits,
-				  GFP_ATOMIC, true);
+	handle = __stack_depot_save(entries, ARRAY_SIZE(entries), GFP_ATOMIC,
+				    true);
+	return stack_depot_set_extra_bits(handle, extra_bits);
 }
 
 void kmsan_internal_set_shadow_origin(void *addr, size_t size, int b,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH 16/18] lib/stackdepot: annotate racy slab_index accesses
  2023-01-30 20:49 [PATCH 00/18] lib/stackdepot: fixes and clean-ups andrey.konovalov
                   ` (14 preceding siblings ...)
  2023-01-30 20:49 ` [PATCH 15/18] lib/stacktrace, kasan, kmsan: rework extra_bits interface andrey.konovalov
@ 2023-01-30 20:49 ` andrey.konovalov
  2023-01-31  8:40   ` Marco Elver
  2023-01-30 20:49 ` [PATCH 17/18] lib/stackdepot: various comments clean-ups andrey.konovalov
  2023-01-30 20:49 ` [PATCH 18/18] lib/stackdepot: move documentation comments to stackdepot.h andrey.konovalov
  17 siblings, 1 reply; 51+ messages in thread
From: andrey.konovalov @ 2023-01-30 20:49 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Vlastimil Babka, kasan-dev, Evgenii Stepanov,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Accesses to slab_index are protected by slab_lock everywhere except
in a sanity check in stack_depot_fetch. The read access there can race
with the write access in depot_alloc_stack.

Use WRITE/READ_ONCE() to annotate the racy accesses.

As the sanity check is only used to print a warning when the stack
depot interface is misused, it does not make a lot of sense to use
proper synchronization there.
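
The resulting pattern, condensed from the hunks below:

	/* Writer in depot_alloc_stack, runs under slab_lock: */
	WRITE_ONCE(slab_index, slab_index + 1);

	/* Lockless reader in stack_depot_fetch: */
	int slab_index_cached = READ_ONCE(slab_index);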

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 lib/stackdepot.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index f291ad6a4e72..cc2fe8563af4 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -269,8 +269,11 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 			return NULL;
 		}
 
-		/* Move on to the next slab. */
-		slab_index++;
+		/*
+		 * Move on to the next slab.
+		 * WRITE_ONCE annotates a race with stack_depot_fetch.
+		 */
+		WRITE_ONCE(slab_index, slab_index + 1);
 		slab_offset = 0;
 		/*
 		 * smp_store_release() here pairs with smp_load_acquire() in
@@ -492,6 +495,8 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 			       unsigned long **entries)
 {
 	union handle_parts parts = { .handle = handle };
+	/* READ_ONCE annotates a race with depot_alloc_stack. */
+	int slab_index_cached = READ_ONCE(slab_index);
 	void *slab;
 	size_t offset = parts.offset << DEPOT_STACK_ALIGN;
 	struct stack_record *stack;
@@ -500,9 +505,9 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 	if (!handle)
 		return 0;
 
-	if (parts.slab_index > slab_index) {
+	if (parts.slab_index > slab_index_cached) {
 		WARN(1, "slab index %d out of bounds (%d) for stack id %08x\n",
-			parts.slab_index, slab_index, handle);
+			parts.slab_index, slab_index_cached, handle);
 		return 0;
 	}
 	slab = stack_slabs[parts.slab_index];
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH 17/18] lib/stackdepot: various comments clean-ups
  2023-01-30 20:49 [PATCH 00/18] lib/stackdepot: fixes and clean-ups andrey.konovalov
                   ` (15 preceding siblings ...)
  2023-01-30 20:49 ` [PATCH 16/18] lib/stackdepot: annotate racy slab_index accesses andrey.konovalov
@ 2023-01-30 20:49 ` andrey.konovalov
  2023-01-30 20:49 ` [PATCH 18/18] lib/stackdepot: move documentation comments to stackdepot.h andrey.konovalov
  17 siblings, 0 replies; 51+ messages in thread
From: andrey.konovalov @ 2023-01-30 20:49 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Vlastimil Babka, kasan-dev, Evgenii Stepanov,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Clean up comments in include/linux/stackdepot.h and lib/stackdepot.c:

1. Rework the initialization comment in stackdepot.h.
2. Rework the header comment in stackdepot.c.
3. Various clean-ups for other comments.

Also adjust whitespace at the find_stack and depot_alloc_stack call
sites.

No functional changes.
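
As an illustration of option 3 in the reworked initialization comment,
a late-init user could look like this hypothetical module (the name is
made up for the sketch):

	static int __init my_debug_module_init(void)
	{
		int ret = stack_depot_init();

		if (ret)
			return ret;
		/* stack_depot_save() can be used from this point on. */
		return 0;
	}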

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 include/linux/stackdepot.h |  36 +++++------
 lib/stackdepot.c           | 120 ++++++++++++++++++-------------------
 2 files changed, 78 insertions(+), 78 deletions(-)

diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
index f999811c66d7..173740987d8b 100644
--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -1,11 +1,11 @@
 /* SPDX-License-Identifier: GPL-2.0-or-later */
 /*
- * A generic stack depot implementation
+ * Stack depot - a stack trace storage that avoids duplication.
  *
  * Author: Alexander Potapenko <glider@google.com>
  * Copyright (C) 2016 Google, Inc.
  *
- * Based on code by Dmitry Chernenkov.
+ * Based on the code by Dmitry Chernenkov.
  */
 
 #ifndef _LINUX_STACKDEPOT_H
@@ -17,35 +17,37 @@ typedef u32 depot_stack_handle_t;
 
 /*
  * Number of bits in the handle that stack depot doesn't use. Users may store
- * information in them.
+ * information in them via stack_depot_set/get_extra_bits.
  */
 #define STACK_DEPOT_EXTRA_BITS 5
 
 /*
- * Every user of stack depot has to call stack_depot_init() during its own init
- * when it's decided that it will be calling stack_depot_save() later. This is
- * recommended for e.g. modules initialized later in the boot process, when
- * slab_is_available() is true.
+ * Using stack depot requires its initialization, which can be done in 3 ways:
  *
- * The alternative is to select STACKDEPOT_ALWAYS_INIT to have stack depot
- * enabled as part of mm_init(), for subsystems where it's known at compile time
- * that stack depot will be used.
+ * 1. Selecting CONFIG_STACKDEPOT_ALWAYS_INIT. This option is suitable in
+ *    scenarios where it's known at compile time that stack depot will be used.
+ *    Enabling this config makes the kernel initialize stack depot in mm_init().
  *
- * Another alternative is to call stack_depot_request_early_init(), when the
- * decision to use stack depot is taken e.g. when evaluating kernel boot
- * parameters, which precedes the enablement point in mm_init().
+ * 2. Calling stack_depot_request_early_init() during early boot, before
+ *    stack_depot_early_init() in mm_init() completes. For example, this can
+ *    be done when evaluating kernel boot parameters.
+ *
+ * 3. Calling stack_depot_init(). Possible after boot is complete. This option
+ *    is recommended for modules initialized later in the boot process, after
+ *    mm_init() completes.
  *
  * stack_depot_init() and stack_depot_request_early_init() can be called
- * regardless of CONFIG_STACKDEPOT and are no-op when disabled. The actual
- * save/fetch/print functions should only be called from code that makes sure
- * CONFIG_STACKDEPOT is enabled.
+ * regardless of whether CONFIG_STACKDEPOT is enabled and are no-op when this
+ * config is disabled. The save/fetch/print stack depot functions can only be
+ * called from the code that makes sure CONFIG_STACKDEPOT is enabled _and_
+ * initializes stack depot via one of the ways listed above.
  */
 #ifdef CONFIG_STACKDEPOT
 int stack_depot_init(void);
 
 void __init stack_depot_request_early_init(void);
 
-/* This is supposed to be called only from mm_init() */
+/* Must be only called from mm_init(). */
 int __init stack_depot_early_init(void);
 #else
 static inline int stack_depot_init(void) { return 0; }
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index cc2fe8563af4..5128f9486ceb 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -1,22 +1,26 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Generic stack depot for storing stack traces.
+ * Stack depot - a stack trace storage that avoids duplication.
  *
- * Some debugging tools need to save stack traces of certain events which can
- * be later presented to the user. For example, KASAN needs to safe alloc and
- * free stacks for each object, but storing two stack traces per object
- * requires too much memory (e.g. SLUB_DEBUG needs 256 bytes per object for
- * that).
+ * Stack depot is intended to be used by subsystems that need to store and
+ * later retrieve many potentially duplicated stack traces without wasting
+ * memory.
  *
- * Instead, stack depot maintains a hashtable of unique stacktraces. Since alloc
- * and free stacks repeat a lot, we save about 100x space.
- * Stacks are never removed from depot, so we store them contiguously one after
- * another in a contiguous memory allocation.
+ * For example, KASAN needs to save allocation and free stack traces for each
+ * object. Storing two stack traces per object requires a lot of memory (e.g.
+ * SLUB_DEBUG needs 256 bytes per object for that). Since allocation and free
+ * stack traces often repeat, using stack depot allows to save about 100x space.
+ *
+ * Internally, stack depot maintains a hash table of unique stacktraces. The
+ * stack traces themselves are stored contiguously one after another in a set
+ * of separate page allocations.
+ *
+ * Stack traces are never removed from stack depot.
  *
  * Author: Alexander Potapenko <glider@google.com>
  * Copyright (C) 2016 Google, Inc.
  *
- * Based on code by Dmitry Chernenkov.
+ * Based on the code by Dmitry Chernenkov.
  */
 
 #define pr_fmt(fmt) "stackdepot: " fmt
@@ -50,7 +54,7 @@
 	(((1LL << (DEPOT_SLAB_INDEX_BITS)) < DEPOT_SLABS_CAP) ? \
 	 (1LL << (DEPOT_SLAB_INDEX_BITS)) : DEPOT_SLABS_CAP)
 
-/* The compact structure to store the reference to stacks. */
+/* Compact structure that stores a reference to a stack. */
 union handle_parts {
 	depot_stack_handle_t handle;
 	struct {
@@ -62,11 +66,11 @@ union handle_parts {
 };
 
 struct stack_record {
-	struct stack_record *next;	/* Link in the hashtable */
-	u32 hash;			/* Hash in the hastable */
-	u32 size;			/* Number of frames in the stack */
+	struct stack_record *next;	/* Link in the hash table */
+	u32 hash;			/* Hash in the hash table */
+	u32 size;			/* Number of stored frames */
 	union handle_parts handle;
-	unsigned long entries[];	/* Variable-sized array of entries. */
+	unsigned long entries[];	/* Variable-sized array of frames */
 };
 
 static bool stack_depot_disabled;
@@ -305,7 +309,7 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 	return stack;
 }
 
-/* Calculate hash for a stack */
+/* Calculates the hash for a stack. */
 static inline u32 hash_stack(unsigned long *entries, unsigned int size)
 {
 	return jhash2((u32 *)entries,
@@ -313,9 +317,9 @@ static inline u32 hash_stack(unsigned long *entries, unsigned int size)
 		      STACK_HASH_SEED);
 }
 
-/* Use our own, non-instrumented version of memcmp().
- *
- * We actually don't care about the order, just the equality.
+/*
+ * Non-instrumented version of memcmp().
+ * Does not check the lexicographical order, only the equality.
  */
 static inline
 int stackdepot_memcmp(const unsigned long *u1, const unsigned long *u2,
@@ -328,7 +332,7 @@ int stackdepot_memcmp(const unsigned long *u1, const unsigned long *u2,
 	return 0;
 }
 
-/* Find a stack that is equal to the one stored in entries in the hash */
+/* Finds a stack in a bucket of the hash table. */
 static inline struct stack_record *find_stack(struct stack_record *bucket,
 					     unsigned long *entries, int size,
 					     u32 hash)
@@ -345,27 +349,27 @@ static inline struct stack_record *find_stack(struct stack_record *bucket,
 }
 
 /**
- * __stack_depot_save - Save a stack trace from an array
+ * __stack_depot_save - Save a stack trace to stack depot
  *
- * @entries:		Pointer to storage array
- * @nr_entries:		Size of the storage array
- * @alloc_flags:	Allocation gfp flags
+ * @entries:		Pointer to the stack trace
+ * @nr_entries:		Number of frames in the stack
+ * @alloc_flags:	Allocation GFP flags
  * @can_alloc:		Allocate stack slabs (increased chance of failure if false)
  *
  * Saves a stack trace from @entries array of size @nr_entries. If @can_alloc is
- * %true, is allowed to replenish the stack slab pool in case no space is left
+ * %true, stack depot can replenish the stack slab pool in case no space is left
  * (allocates using GFP flags of @alloc_flags). If @can_alloc is %false, avoids
- * any allocations and will fail if no space is left to store the stack trace.
+ * any allocations and fails if no space is left to store the stack trace.
  *
- * If the stack trace in @entries is from an interrupt, only the portion up to
- * interrupt entry is saved.
+ * If the provided stack trace comes from the interrupt context, only the part
+ * up to the interrupt entry is saved.
  *
  * Context: Any context, but setting @can_alloc to %false is required if
  *          alloc_pages() cannot be used from the current context. Currently
- *          this is the case from contexts where neither %GFP_ATOMIC nor
+ *          this is the case for contexts where neither %GFP_ATOMIC nor
  *          %GFP_NOWAIT can be used (NMI, raw_spin_lock).
  *
- * Return: The handle of the stack struct stored in depot, 0 on failure.
+ * Return: Handle of the stack struct stored in depot, 0 on failure
  */
 depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 					unsigned int nr_entries,
@@ -380,11 +384,11 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 
 	/*
 	 * If this stack trace is from an interrupt, including anything before
-	 * interrupt entry usually leads to unbounded stackdepot growth.
+	 * interrupt entry usually leads to unbounded stack depot growth.
 	 *
-	 * Because use of filter_irq_stacks() is a requirement to ensure
-	 * stackdepot can efficiently deduplicate interrupt stacks, always
-	 * filter_irq_stacks() to simplify all callers' use of stackdepot.
+	 * Since use of filter_irq_stacks() is a requirement to ensure stack
+	 * depot can efficiently deduplicate interrupt stacks, always
+	 * filter_irq_stacks() to simplify all callers' use of stack depot.
 	 */
 	nr_entries = filter_irq_stacks(entries, nr_entries);
 
@@ -399,8 +403,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 	 * The smp_load_acquire() here pairs with smp_store_release() to
 	 * |bucket| below.
 	 */
-	found = find_stack(smp_load_acquire(bucket), entries,
-			   nr_entries, hash);
+	found = find_stack(smp_load_acquire(bucket), entries, nr_entries, hash);
 	if (found)
 		goto exit;
 
@@ -430,7 +433,8 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 
 	found = find_stack(*bucket, entries, nr_entries, hash);
 	if (!found) {
-		struct stack_record *new = depot_alloc_stack(entries, nr_entries, hash, &prealloc);
+		struct stack_record *new =
+			depot_alloc_stack(entries, nr_entries, hash, &prealloc);
 
 		if (new) {
 			new->next = *bucket;
@@ -443,8 +447,8 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 		}
 	} else if (prealloc) {
 		/*
-		 * We didn't need to store this stack trace, but let's keep
-		 * the preallocated memory for the future.
+		 * Stack depot already contains this stack trace, but let's
+		 * keep the preallocated memory for the future.
 		 */
 		depot_init_slab(&prealloc);
 	}
@@ -452,7 +456,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 	raw_spin_unlock_irqrestore(&slab_lock, flags);
 exit:
 	if (prealloc) {
-		/* Nobody used this memory, ok to free it. */
+		/* Stack depot didn't use this memory, free it. */
 		free_pages((unsigned long)prealloc, DEPOT_SLAB_ORDER);
 	}
 	if (found)
@@ -463,16 +467,16 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 EXPORT_SYMBOL_GPL(__stack_depot_save);
 
 /**
- * stack_depot_save - Save a stack trace from an array
+ * stack_depot_save - Save a stack trace to stack depot
  *
- * @entries:		Pointer to storage array
- * @nr_entries:		Size of the storage array
- * @alloc_flags:	Allocation gfp flags
+ * @entries:		Pointer to the stack trace
+ * @nr_entries:		Number of frames in the stack
+ * @alloc_flags:	Allocation GFP flags
  *
  * Context: Contexts where allocations via alloc_pages() are allowed.
  *          See __stack_depot_save() for more details.
  *
- * Return: The handle of the stack struct stored in depot, 0 on failure.
+ * Return: Handle of the stack trace stored in depot, 0 on failure
  */
 depot_stack_handle_t stack_depot_save(unsigned long *entries,
 				      unsigned int nr_entries,
@@ -483,13 +487,12 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
 EXPORT_SYMBOL_GPL(stack_depot_save);
 
 /**
- * stack_depot_fetch - Fetch stack entries from a depot
+ * stack_depot_fetch - Fetch a stack trace from stack depot
  *
- * @handle:		Stack depot handle which was returned from
- *			stack_depot_save().
- * @entries:		Pointer to store the entries address
+ * @handle:	Stack depot handle returned from stack_depot_save()
+ * @entries:	Pointer to store the address of the stack trace
  *
- * Return: The number of trace entries for this depot.
+ * Return: Number of frames for the fetched stack
  */
 unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 			       unsigned long **entries)
@@ -521,11 +524,9 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 EXPORT_SYMBOL_GPL(stack_depot_fetch);
 
 /**
- * stack_depot_print - print stack entries from a depot
- *
- * @stack:		Stack depot handle which was returned from
- *			stack_depot_save().
+ * stack_depot_print - Print a stack trace from stack depot
  *
+ * @stack:	Stack depot handle returned from stack_depot_save()
  */
 void stack_depot_print(depot_stack_handle_t stack)
 {
@@ -539,17 +540,14 @@ void stack_depot_print(depot_stack_handle_t stack)
 EXPORT_SYMBOL_GPL(stack_depot_print);
 
 /**
- * stack_depot_snprint - print stack entries from a depot into a buffer
+ * stack_depot_snprint - Print a stack trace from stack depot into a buffer
  *
- * @handle:	Stack depot handle which was returned from
- *		stack_depot_save().
+ * @handle:	Stack depot handle returned from stack_depot_save()
  * @buf:	Pointer to the print buffer
- *
  * @size:	Size of the print buffer
- *
  * @spaces:	Number of leading spaces to print
  *
- * Return:	Number of bytes printed.
+ * Return:	Number of bytes printed
  */
 int stack_depot_snprint(depot_stack_handle_t handle, char *buf, size_t size,
 		       int spaces)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH 18/18] lib/stackdepot: move documentation comments to stackdepot.h
  2023-01-30 20:49 [PATCH 00/18] lib/stackdepot: fixes and clean-ups andrey.konovalov
                   ` (16 preceding siblings ...)
  2023-01-30 20:49 ` [PATCH 17/18] lib/stackdepot: various comments clean-ups andrey.konovalov
@ 2023-01-30 20:49 ` andrey.konovalov
  17 siblings, 0 replies; 51+ messages in thread
From: andrey.konovalov @ 2023-01-30 20:49 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Vlastimil Babka, kasan-dev, Evgenii Stepanov,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Move all interface- and usage-related documentation comments to
include/linux/stackdepot.h.

It makes sense to have them in the header where they are available to
the interface users.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 include/linux/stackdepot.h | 87 ++++++++++++++++++++++++++++++++++++++
 lib/stackdepot.c           | 87 --------------------------------------
 2 files changed, 87 insertions(+), 87 deletions(-)

diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
index 173740987d8b..a828fbece1ba 100644
--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -2,6 +2,17 @@
 /*
  * Stack depot - a stack trace storage that avoids duplication.
  *
+ * Stack depot is intended to be used by subsystems that need to store and
+ * later retrieve many potentially duplicated stack traces without wasting
+ * memory.
+ *
+ * For example, KASAN needs to save allocation and free stack traces for each
+ * object. Storing two stack traces per object requires a lot of memory (e.g.
+ * SLUB_DEBUG needs 256 bytes per object for that). Since allocation and free
+ * stack traces often repeat, using stack depot allows to save about 100x space.
+ *
+ * Stack traces are never removed from stack depot.
+ *
  * Author: Alexander Potapenko <glider@google.com>
  * Copyright (C) 2016 Google, Inc.
  *
@@ -57,24 +68,100 @@ static inline void stack_depot_request_early_init(void) { }
 static inline int stack_depot_early_init(void)	{ return 0; }
 #endif
 
+/**
+ * __stack_depot_save - Save a stack trace to stack depot
+ *
+ * @entries:		Pointer to the stack trace
+ * @nr_entries:		Number of frames in the stack
+ * @alloc_flags:	Allocation GFP flags
+ * @can_alloc:		Allocate stack slabs (increased chance of failure if false)
+ *
+ * Saves a stack trace from @entries array of size @nr_entries. If @can_alloc is
+ * %true, stack depot can replenish the stack slab pool in case no space is left
+ * (allocates using GFP flags of @alloc_flags). If @can_alloc is %false, avoids
+ * any allocations and fails if no space is left to store the stack trace.
+ *
+ * If the provided stack trace comes from the interrupt context, only the part
+ * up to the interrupt entry is saved.
+ *
+ * Context: Any context, but setting @can_alloc to %false is required if
+ *          alloc_pages() cannot be used from the current context. Currently
+ *          this is the case for contexts where neither %GFP_ATOMIC nor
+ *          %GFP_NOWAIT can be used (NMI, raw_spin_lock).
+ *
+ * Return: Handle of the stack struct stored in depot, 0 on failure
+ */
 depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 					unsigned int nr_entries,
 					gfp_t gfp_flags, bool can_alloc);
 
+/**
+ * stack_depot_save - Save a stack trace to stack depot
+ *
+ * @entries:		Pointer to the stack trace
+ * @nr_entries:		Number of frames in the stack
+ * @alloc_flags:	Allocation GFP flags
+ *
+ * Context: Contexts where allocations via alloc_pages() are allowed.
+ *          See __stack_depot_save() for more details.
+ *
+ * Return: Handle of the stack trace stored in depot, 0 on failure
+ */
 depot_stack_handle_t stack_depot_save(unsigned long *entries,
 				      unsigned int nr_entries, gfp_t gfp_flags);
 
+/**
+ * stack_depot_fetch - Fetch a stack trace from stack depot
+ *
+ * @handle:	Stack depot handle returned from stack_depot_save()
+ * @entries:	Pointer to store the address of the stack trace
+ *
+ * Return: Number of frames for the fetched stack
+ */
 unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 			       unsigned long **entries);
 
+/**
+ * stack_depot_print - Print a stack trace from stack depot
+ *
+ * @stack:	Stack depot handle returned from stack_depot_save()
+ */
 void stack_depot_print(depot_stack_handle_t stack);
 
+/**
+ * stack_depot_snprint - Print a stack trace from stack depot into a buffer
+ *
+ * @handle:	Stack depot handle returned from stack_depot_save()
+ * @buf:	Pointer to the print buffer
+ * @size:	Size of the print buffer
+ * @spaces:	Number of leading spaces to print
+ *
+ * Return:	Number of bytes printed
+ */
 int stack_depot_snprint(depot_stack_handle_t handle, char *buf, size_t size,
 		       int spaces);
 
+/**
+ * stack_depot_set_extra_bits - Set extra bits in a stack depot handle
+ *
+ * @handle:	Stack depot handle
+ * @extra_bits:	Value to set the extra bits
+ *
+ * Return: Stack depot handle with extra bits set
+ *
+ * Stack depot handles have a few unused bits, which can be used for storing
+ * user-specific information. These bits are transparent to the stack depot.
+ */
 depot_stack_handle_t stack_depot_set_extra_bits(depot_stack_handle_t handle,
 						unsigned int extra_bits);
 
+/**
+ * stack_depot_get_extra_bits - Retrieve extra bits from a stack depot handle
+ *
+ * @handle:	Stack depot handle with extra bits saved
+ *
+ * Return: Extra bits retrieved from the stack depot handle
+ */
 unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle);
 
 #endif
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 5128f9486ceb..06bea439d748 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -2,21 +2,10 @@
 /*
  * Stack depot - a stack trace storage that avoids duplication.
  *
- * Stack depot is intended to be used by subsystems that need to store and
- * later retrieve many potentially duplicated stack traces without wasting
- * memory.
- *
- * For example, KASAN needs to save allocation and free stack traces for each
- * object. Storing two stack traces per object requires a lot of memory (e.g.
- * SLUB_DEBUG needs 256 bytes per object for that). Since allocation and free
- * stack traces often repeat, using stack depot allows to save about 100x space.
- *
  * Internally, stack depot maintains a hash table of unique stacktraces. The
  * stack traces themselves are stored contiguously one after another in a set
  * of separate page allocations.
  *
- * Stack traces are never removed from stack depot.
- *
  * Author: Alexander Potapenko <glider@google.com>
  * Copyright (C) 2016 Google, Inc.
  *
@@ -348,29 +337,6 @@ static inline struct stack_record *find_stack(struct stack_record *bucket,
 	return NULL;
 }
 
-/**
- * __stack_depot_save - Save a stack trace to stack depot
- *
- * @entries:		Pointer to the stack trace
- * @nr_entries:		Number of frames in the stack
- * @alloc_flags:	Allocation GFP flags
- * @can_alloc:		Allocate stack slabs (increased chance of failure if false)
- *
- * Saves a stack trace from @entries array of size @nr_entries. If @can_alloc is
- * %true, stack depot can replenish the stack slab pool in case no space is left
- * (allocates using GFP flags of @alloc_flags). If @can_alloc is %false, avoids
- * any allocations and fails if no space is left to store the stack trace.
- *
- * If the provided stack trace comes from the interrupt context, only the part
- * up to the interrupt entry is saved.
- *
- * Context: Any context, but setting @can_alloc to %false is required if
- *          alloc_pages() cannot be used from the current context. Currently
- *          this is the case for contexts where neither %GFP_ATOMIC nor
- *          %GFP_NOWAIT can be used (NMI, raw_spin_lock).
- *
- * Return: Handle of the stack struct stored in depot, 0 on failure
- */
 depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 					unsigned int nr_entries,
 					gfp_t alloc_flags, bool can_alloc)
@@ -466,18 +432,6 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 }
 EXPORT_SYMBOL_GPL(__stack_depot_save);
 
-/**
- * stack_depot_save - Save a stack trace to stack depot
- *
- * @entries:		Pointer to the stack trace
- * @nr_entries:		Number of frames in the stack
- * @alloc_flags:	Allocation GFP flags
- *
- * Context: Contexts where allocations via alloc_pages() are allowed.
- *          See __stack_depot_save() for more details.
- *
- * Return: Handle of the stack trace stored in depot, 0 on failure
- */
 depot_stack_handle_t stack_depot_save(unsigned long *entries,
 				      unsigned int nr_entries,
 				      gfp_t alloc_flags)
@@ -486,14 +440,6 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
 }
 EXPORT_SYMBOL_GPL(stack_depot_save);
 
-/**
- * stack_depot_fetch - Fetch a stack trace from stack depot
- *
- * @handle:	Stack depot handle returned from stack_depot_save()
- * @entries:	Pointer to store the address of the stack trace
- *
- * Return: Number of frames for the fetched stack
- */
 unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 			       unsigned long **entries)
 {
@@ -523,11 +469,6 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 }
 EXPORT_SYMBOL_GPL(stack_depot_fetch);
 
-/**
- * stack_depot_print - Print a stack trace from stack depot
- *
- * @stack:	Stack depot handle returned from stack_depot_save()
- */
 void stack_depot_print(depot_stack_handle_t stack)
 {
 	unsigned long *entries;
@@ -539,16 +480,6 @@ void stack_depot_print(depot_stack_handle_t stack)
 }
 EXPORT_SYMBOL_GPL(stack_depot_print);
 
-/**
- * stack_depot_snprint - Print a stack trace from stack depot into a buffer
- *
- * @handle:	Stack depot handle returned from stack_depot_save()
- * @buf:	Pointer to the print buffer
- * @size:	Size of the print buffer
- * @spaces:	Number of leading spaces to print
- *
- * Return:	Number of bytes printed
- */
 int stack_depot_snprint(depot_stack_handle_t handle, char *buf, size_t size,
 		       int spaces)
 {
@@ -561,17 +492,6 @@ int stack_depot_snprint(depot_stack_handle_t handle, char *buf, size_t size,
 }
 EXPORT_SYMBOL_GPL(stack_depot_snprint);
 
-/**
- * stack_depot_set_extra_bits - Set extra bits in a stack depot handle
- *
- * @handle:	Stack depot handle
- * @extra_bits:	Value to set the extra bits
- *
- * Return: Stack depot handle with extra bits set
- *
- * Stack depot handles have a few unused bits, which can be used for storing
- * user-specific information. These bits are transparent to the stack depot.
- */
 depot_stack_handle_t stack_depot_set_extra_bits(depot_stack_handle_t handle,
 						unsigned int extra_bits)
 {
@@ -582,13 +502,6 @@ depot_stack_handle_t stack_depot_set_extra_bits(depot_stack_handle_t handle,
 }
 EXPORT_SYMBOL(stack_depot_set_extra_bits);
 
-/**
- * stack_depot_get_extra_bits - Retrieve extra bits from a stack depot handle
- *
- * @handle:	Stack depot handle with extra bits saved
- *
- * Return: Extra bits retrieved from the stack depot handle
- */
 unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle)
 {
 	union handle_parts parts = { .handle = handle };
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 51+ messages in thread

* Re: [PATCH 01/18] lib/stackdepot: fix setting next_slab_inited in init_stack_slab
  2023-01-30 20:49 ` [PATCH 01/18] lib/stackdepot: fix setting next_slab_inited in init_stack_slab andrey.konovalov
@ 2023-01-31  0:18   ` Andrew Morton
  2023-01-31 19:00     ` Andrey Konovalov
  2023-01-31  9:07   ` Alexander Potapenko
  2023-01-31  9:29   ` Alexander Potapenko
  2 siblings, 1 reply; 51+ messages in thread
From: Andrew Morton @ 2023-01-31  0:18 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Marco Elver, Alexander Potapenko, Andrey Konovalov,
	Vlastimil Babka, kasan-dev, Evgenii Stepanov, linux-mm,
	linux-kernel, Andrey Konovalov

On Mon, 30 Jan 2023 21:49:25 +0100 andrey.konovalov@linux.dev wrote:

> In commit 305e519ce48e ("lib/stackdepot.c: fix global out-of-bounds in
> stack_slabs"), init_stack_slab was changed to only use preallocated
> memory for the next slab if the slab number limit is not reached.
> However, setting next_slab_inited was not moved together with updating
> stack_slabs.
> 
> Set next_slab_inited only if the preallocated memory was used for the
> next slab.

Please provide a full description of the user-visible runtime effects
of the bug (always always).

I'll add the cc:stable (per your comments in the [0/N] cover letter),
but it's more reliable to add it to the changelog yourself.

As to when I upstream this: don't know - that depends on the
user-visible-effects thing.


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 16/18] lib/stackdepot: annotate racy slab_index accesses
  2023-01-30 20:49 ` [PATCH 16/18] lib/stackdepot: annotate racy slab_index accesses andrey.konovalov
@ 2023-01-31  8:40   ` Marco Elver
  2023-01-31 18:57     ` Andrey Konovalov
  0 siblings, 1 reply; 51+ messages in thread
From: Marco Elver @ 2023-01-31  8:40 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Alexander Potapenko, Andrey Konovalov, Vlastimil Babka,
	kasan-dev, Evgenii Stepanov, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

On Mon, 30 Jan 2023 at 21:51, <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Accesses to slab_index are protected by slab_lock everywhere except
> in a sanity check in stack_depot_fetch. The read access there can race
> with the write access in depot_alloc_stack.
>
> Use WRITE/READ_ONCE() to annotate the racy accesses.
>
> As the sanity check is only used to print a warning in case of a
> violation of the stack depot interface usage, it does not make a lot
> of sense to use proper synchronization.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
>  lib/stackdepot.c | 13 +++++++++----
>  1 file changed, 9 insertions(+), 4 deletions(-)
>
> diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> index f291ad6a4e72..cc2fe8563af4 100644
> --- a/lib/stackdepot.c
> +++ b/lib/stackdepot.c
> @@ -269,8 +269,11 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
>                         return NULL;
>                 }
>
> -               /* Move on to the next slab. */
> -               slab_index++;
> +               /*
> +                * Move on to the next slab.
> +                * WRITE_ONCE annotates a race with stack_depot_fetch.

"Pairs with potential concurrent read in stack_depot_fetch()." would be clearer.

I wouldn't say WRITE_ONCE annotates a race (a race involves 2+
accesses, but here there's just one); it just marks this access, which
itself is paired with the potential racing read in the other function.

> +                */
> +               WRITE_ONCE(slab_index, slab_index + 1);
>                 slab_offset = 0;
>                 /*
>                  * smp_store_release() here pairs with smp_load_acquire() in
> @@ -492,6 +495,8 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
>                                unsigned long **entries)
>  {
>         union handle_parts parts = { .handle = handle };
> +       /* READ_ONCE annotates a race with depot_alloc_stack. */
> +       int slab_index_cached = READ_ONCE(slab_index);
>         void *slab;
>         size_t offset = parts.offset << DEPOT_STACK_ALIGN;
>         struct stack_record *stack;
> @@ -500,9 +505,9 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
>         if (!handle)
>                 return 0;
>
> -       if (parts.slab_index > slab_index) {
> +       if (parts.slab_index > slab_index_cached) {
>                 WARN(1, "slab index %d out of bounds (%d) for stack id %08x\n",
> -                       parts.slab_index, slab_index, handle);
> +                       parts.slab_index, slab_index_cached, handle);
>                 return 0;
>         }
>         slab = stack_slabs[parts.slab_index];
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 15/18] lib/stacktrace, kasan, kmsan: rework extra_bits interface
  2023-01-30 20:49 ` [PATCH 15/18] lib/stacktrace, kasan, kmsan: rework extra_bits interface andrey.konovalov
@ 2023-01-31  8:53   ` Marco Elver
  2023-01-31 18:58     ` Andrey Konovalov
  2023-02-02 10:03   ` Alexander Potapenko
  1 sibling, 1 reply; 51+ messages in thread
From: Marco Elver @ 2023-01-31  8:53 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Alexander Potapenko, Andrey Konovalov, Vlastimil Babka,
	kasan-dev, Evgenii Stepanov, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

On Mon, 30 Jan 2023 at 21:51, <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> The current implementation of the extra_bits interface is confusing:
> passing extra_bits to __stack_depot_save makes it seem that the extra
> bits are somehow stored in stack depot. In reality, they are only
> embedded into a stack depot handle and are not used within stack depot.
>
> Drop the extra_bits argument from __stack_depot_save and instead provide
> a new stack_depot_set_extra_bits function (similar to the exsiting
> stack_depot_get_extra_bits) that saves extra bits into a stack depot
> handle.
>
> Update the callers of __stack_depot_save to use the new interace.
>
> This change also fixes a minor issue in the old code: __stack_depot_save
> does not return NULL if saving stack trace fails and extra_bits is used.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
>  include/linux/stackdepot.h |  4 +++-
>  lib/stackdepot.c           | 38 +++++++++++++++++++++++++++++---------
>  mm/kasan/common.c          |  2 +-
>  mm/kmsan/core.c            | 10 +++++++---
>  4 files changed, 40 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
> index c4e3abc16b16..f999811c66d7 100644
> --- a/include/linux/stackdepot.h
> +++ b/include/linux/stackdepot.h
> @@ -57,7 +57,6 @@ static inline int stack_depot_early_init(void)        { return 0; }
>
>  depot_stack_handle_t __stack_depot_save(unsigned long *entries,
>                                         unsigned int nr_entries,
> -                                       unsigned int extra_bits,
>                                         gfp_t gfp_flags, bool can_alloc);
>
>  depot_stack_handle_t stack_depot_save(unsigned long *entries,
> @@ -71,6 +70,9 @@ void stack_depot_print(depot_stack_handle_t stack);
>  int stack_depot_snprint(depot_stack_handle_t handle, char *buf, size_t size,
>                        int spaces);
>
> +depot_stack_handle_t stack_depot_set_extra_bits(depot_stack_handle_t handle,
> +                                               unsigned int extra_bits);

Can you add __must_check to this function? Either that or making
handle an in/out param, as otherwise it might be easy to think that it
doesn't return anything ("set_foo()" seems like it sets the
information in the handle-associated data but not handle itself ... in
case someone missed the documentation).
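
For reference, a sketch of the annotated declaration (using the kernel's
__must_check attribute, i.e. warn_unused_result; the prototype itself is
the one quoted above):

	__must_check depot_stack_handle_t
	stack_depot_set_extra_bits(depot_stack_handle_t handle,
				   unsigned int extra_bits);

A caller that discards the return value would then get a compile-time
warning instead of silently keeping the old handle.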

>  unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle);
>
>  #endif
> diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> index 7282565722f2..f291ad6a4e72 100644
> --- a/lib/stackdepot.c
> +++ b/lib/stackdepot.c
> @@ -346,7 +346,6 @@ static inline struct stack_record *find_stack(struct stack_record *bucket,
>   *
>   * @entries:           Pointer to storage array
>   * @nr_entries:                Size of the storage array
> - * @extra_bits:                Flags to store in unused bits of depot_stack_handle_t
>   * @alloc_flags:       Allocation gfp flags
>   * @can_alloc:         Allocate stack slabs (increased chance of failure if false)
>   *
> @@ -358,10 +357,6 @@ static inline struct stack_record *find_stack(struct stack_record *bucket,
>   * If the stack trace in @entries is from an interrupt, only the portion up to
>   * interrupt entry is saved.
>   *
> - * Additional opaque flags can be passed in @extra_bits, stored in the unused
> - * bits of the stack handle, and retrieved using stack_depot_get_extra_bits()
> - * without calling stack_depot_fetch().
> - *
>   * Context: Any context, but setting @can_alloc to %false is required if
>   *          alloc_pages() cannot be used from the current context. Currently
>   *          this is the case from contexts where neither %GFP_ATOMIC nor
> @@ -371,7 +366,6 @@ static inline struct stack_record *find_stack(struct stack_record *bucket,
>   */
>  depot_stack_handle_t __stack_depot_save(unsigned long *entries,
>                                         unsigned int nr_entries,
> -                                       unsigned int extra_bits,
>                                         gfp_t alloc_flags, bool can_alloc)
>  {
>         struct stack_record *found = NULL, **bucket;
> @@ -461,8 +455,6 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
>         if (found)
>                 retval.handle = found->handle.handle;
>  fast_exit:
> -       retval.extra = extra_bits;
> -
>         return retval.handle;
>  }
>  EXPORT_SYMBOL_GPL(__stack_depot_save);
> @@ -483,7 +475,7 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
>                                       unsigned int nr_entries,
>                                       gfp_t alloc_flags)
>  {
> -       return __stack_depot_save(entries, nr_entries, 0, alloc_flags, true);
> +       return __stack_depot_save(entries, nr_entries, alloc_flags, true);
>  }
>  EXPORT_SYMBOL_GPL(stack_depot_save);
>
> @@ -566,6 +558,34 @@ int stack_depot_snprint(depot_stack_handle_t handle, char *buf, size_t size,
>  }
>  EXPORT_SYMBOL_GPL(stack_depot_snprint);
>
> +/**
> + * stack_depot_set_extra_bits - Set extra bits in a stack depot handle
> + *
> + * @handle:    Stack depot handle
> + * @extra_bits:        Value to set the extra bits
> + *
> + * Return: Stack depot handle with extra bits set
> + *
> + * Stack depot handles have a few unused bits, which can be used for storing
> + * user-specific information. These bits are transparent to the stack depot.
> + */
> +depot_stack_handle_t stack_depot_set_extra_bits(depot_stack_handle_t handle,
> +                                               unsigned int extra_bits)
> +{
> +       union handle_parts parts = { .handle = handle };
> +
> +       parts.extra = extra_bits;
> +       return parts.handle;
> +}
> +EXPORT_SYMBOL(stack_depot_set_extra_bits);
> +
> +/**
> + * stack_depot_get_extra_bits - Retrieve extra bits from a stack depot handle
> + *
> + * @handle:    Stack depot handle with extra bits saved
> + *
> + * Return: Extra bits retrieved from the stack depot handle
> + */
>  unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle)
>  {
>         union handle_parts parts = { .handle = handle };
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 833bf2cfd2a3..50f4338b477f 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -43,7 +43,7 @@ depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc)
>         unsigned int nr_entries;
>
>         nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
> -       return __stack_depot_save(entries, nr_entries, 0, flags, can_alloc);
> +       return __stack_depot_save(entries, nr_entries, flags, can_alloc);
>  }
>
>  void kasan_set_track(struct kasan_track *track, gfp_t flags)
> diff --git a/mm/kmsan/core.c b/mm/kmsan/core.c
> index 112dce135c7f..f710257d6867 100644
> --- a/mm/kmsan/core.c
> +++ b/mm/kmsan/core.c
> @@ -69,13 +69,15 @@ depot_stack_handle_t kmsan_save_stack_with_flags(gfp_t flags,
>  {
>         unsigned long entries[KMSAN_STACK_DEPTH];
>         unsigned int nr_entries;
> +       depot_stack_handle_t handle;
>
>         nr_entries = stack_trace_save(entries, KMSAN_STACK_DEPTH, 0);
>
>         /* Don't sleep (see might_sleep_if() in __alloc_pages_nodemask()). */
>         flags &= ~__GFP_DIRECT_RECLAIM;
>
> -       return __stack_depot_save(entries, nr_entries, extra, flags, true);
> +       handle = __stack_depot_save(entries, nr_entries, flags, true);
> +       return stack_depot_set_extra_bits(handle, extra);
>  }
>
>  /* Copy the metadata following the memmove() behavior. */
> @@ -215,6 +217,7 @@ depot_stack_handle_t kmsan_internal_chain_origin(depot_stack_handle_t id)
>         u32 extra_bits;
>         int depth;
>         bool uaf;
> +       depot_stack_handle_t handle;
>
>         if (!id)
>                 return id;
> @@ -250,8 +253,9 @@ depot_stack_handle_t kmsan_internal_chain_origin(depot_stack_handle_t id)
>          * positives when __stack_depot_save() passes it to instrumented code.
>          */
>         kmsan_internal_unpoison_memory(entries, sizeof(entries), false);
> -       return __stack_depot_save(entries, ARRAY_SIZE(entries), extra_bits,
> -                                 GFP_ATOMIC, true);
> +       handle = __stack_depot_save(entries, ARRAY_SIZE(entries), GFP_ATOMIC,
> +                                   true);
> +       return stack_depot_set_extra_bits(handle, extra_bits);
>  }
>
>  void kmsan_internal_set_shadow_origin(void *addr, size_t size, int b,
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 01/18] lib/stackdepot: fix setting next_slab_inited in init_stack_slab
  2023-01-30 20:49 ` [PATCH 01/18] lib/stackdepot: fix setting next_slab_inited in init_stack_slab andrey.konovalov
  2023-01-31  0:18   ` Andrew Morton
@ 2023-01-31  9:07   ` Alexander Potapenko
  2023-01-31  9:29   ` Alexander Potapenko
  2 siblings, 0 replies; 51+ messages in thread
From: Alexander Potapenko @ 2023-01-31  9:07 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Marco Elver, Andrey Konovalov, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On Mon, Jan 30, 2023 at 9:49 PM <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> In commit 305e519ce48e ("lib/stackdepot.c: fix global out-of-bounds in
> stack_slabs"), init_stack_slab was changed to only use preallocated
> memory for the next slab if the slab number limit is not reached.
> However, setting next_slab_inited was not moved together with updating
> stack_slabs.
>
> Set next_slab_inited only if the preallocated memory was used for the
> next slab.
>
> Fixes: 305e519ce48e ("lib/stackdepot.c: fix global out-of-bounds in stack_slabs")
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 01/18] lib/stackdepot: fix setting next_slab_inited in init_stack_slab
  2023-01-30 20:49 ` [PATCH 01/18] lib/stackdepot: fix setting next_slab_inited in init_stack_slab andrey.konovalov
  2023-01-31  0:18   ` Andrew Morton
  2023-01-31  9:07   ` Alexander Potapenko
@ 2023-01-31  9:29   ` Alexander Potapenko
  2023-01-31 18:59     ` Andrey Konovalov
  2 siblings, 1 reply; 51+ messages in thread
From: Alexander Potapenko @ 2023-01-31  9:29 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Marco Elver, Andrey Konovalov, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On Mon, Jan 30, 2023 at 9:49 PM <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> In commit 305e519ce48e ("lib/stackdepot.c: fix global out-of-bounds in
> stack_slabs"), init_stack_slab was changed to only use preallocated
> memory for the next slab if the slab number limit is not reached.
> However, setting next_slab_inited was not moved together with updating
> stack_slabs.
>
> Set next_slab_inited only if the preallocated memory was used for the
> next slab.
>
> Fixes: 305e519ce48e ("lib/stackdepot.c: fix global out-of-bounds in stack_slabs")
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>

Wait, I think there's a problem here.

> diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> index 79e894cf8406..0eed9bbcf23e 100644
> --- a/lib/stackdepot.c
> +++ b/lib/stackdepot.c
> @@ -105,12 +105,13 @@ static bool init_stack_slab(void **prealloc)
>                 if (depot_index + 1 < STACK_ALLOC_MAX_SLABS) {
If we get to this branch, but the condition is false, this means that:
 - next_slab_inited == 0
 - depot_index == STACK_ALLOC_MAX_SLABS - 1 (the last valid index)
 - stack_slabs[depot_index] != NULL.

So stack_slabs[] is at full capacity, but upon leaving
init_stack_slab() we'll always keep next_slab_inited==0.

Now every time __stack_depot_save() is called for a known stack trace,
it will preallocate 1<<STACK_ALLOC_ORDER pages (because
next_slab_inited==0), then find the stack trace id in the hash, then
pass the preallocated pages to init_stack_slab(), which will not
change the value of next_slab_inited.
Then the preallocated pages will be freed, and next time
__stack_depot_save() is called they'll be allocated again.
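
To make the cycle concrete, here is a condensed sketch of that path
(simplified from __stack_depot_save(); not a verbatim excerpt, and using
the pre-rename variable names):

	if (can_alloc && !smp_load_acquire(&next_slab_inited)) {
		/* The next slab looks uninitialized, so preallocate. */
		page = alloc_pages(alloc_flags, STACK_ALLOC_ORDER);
		if (page)
			prealloc = page_address(page);
	}

	raw_spin_lock_irqsave(&depot_lock, flags);
	found = find_stack(*bucket, entries, nr_entries, hash);
	if (found && prealloc)
		/* At full capacity: consumes nothing, sets nothing. */
		init_stack_slab(&prealloc);
	raw_spin_unlock_irqrestore(&depot_lock, flags);

	if (prealloc)
		/* Still set, so the freshly allocated pages are freed... */
		free_pages((unsigned long)prealloc, STACK_ALLOC_ORDER);
	/* ...and the next call repeats the same allocate/free cycle. */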

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 02/18] lib/stackdepot: put functions in logical order
  2023-01-30 20:49 ` [PATCH 02/18] lib/stackdepot: put functions in logical order andrey.konovalov
@ 2023-01-31 10:20   ` Alexander Potapenko
  0 siblings, 0 replies; 51+ messages in thread
From: Alexander Potapenko @ 2023-01-31 10:20 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Marco Elver, Andrey Konovalov, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On Mon, Jan 30, 2023 at 9:49 PM <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Put stack depot functions' declarations and definitions in a more logical
> order:
>
> 1. Functions that save stack traces into stack depot.
> 2. Functions that fetch and print stack traces.
> 3. stack_depot_get_extra_bits that operates on stack depot handles
>    and does not interact with the stack depot storage.
>
> No functional changes.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 03/18] lib/stackdepot: use pr_fmt to define message format
  2023-01-30 20:49 ` [PATCH 03/18] lib/stackdepot: use pr_fmt to define message format andrey.konovalov
@ 2023-01-31 10:24   ` Alexander Potapenko
  0 siblings, 0 replies; 51+ messages in thread
From: Alexander Potapenko @ 2023-01-31 10:24 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Marco Elver, Andrey Konovalov, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On Mon, Jan 30, 2023 at 9:49 PM <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Use pr_fmt to define the format for printing stack depot messages instead
> of duplicating the "Stack Depot" prefix in each message.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
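
For readers unfamiliar with the mechanism, pr_fmt works roughly like this
(a minimal sketch; the exact prefix string is whatever the patch picks,
"stackdepot: " here is an assumption):

	/* Must come before printk.h is pulled in, directly or indirectly. */
	#define pr_fmt(fmt) "stackdepot: " fmt

	#include <linux/printk.h>

	static void example(void)
	{
		/* Expands to printk(KERN_INFO "stackdepot: " "hash table allocated\n"); */
		pr_info("hash table allocated\n");
	}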

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 04/18] lib/stackdepot, mm: rename stack_depot_want_early_init
  2023-01-30 20:49 ` [PATCH 04/18] lib/stackdepot, mm: rename stack_depot_want_early_init andrey.konovalov
@ 2023-01-31 10:26   ` Alexander Potapenko
  2023-02-08 16:40   ` Vlastimil Babka
  1 sibling, 0 replies; 51+ messages in thread
From: Alexander Potapenko @ 2023-01-31 10:26 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Marco Elver, Andrey Konovalov, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On Mon, Jan 30, 2023 at 9:49 PM <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Rename stack_depot_want_early_init to stack_depot_request_early_init.
>
> The old name is confusing, as it hints at returning some kind of intention
> of stack depot. The new name reflects that this function requests an action
> from stack depot instead.
>
> No functional changes.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 05/18] lib/stackdepot: rename stack_depot_disable
  2023-01-30 20:49 ` [PATCH 05/18] lib/stackdepot: rename stack_depot_disable andrey.konovalov
@ 2023-01-31 10:28   ` Alexander Potapenko
  0 siblings, 0 replies; 51+ messages in thread
From: Alexander Potapenko @ 2023-01-31 10:28 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Marco Elver, Andrey Konovalov, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On Mon, Jan 30, 2023 at 9:49 PM <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Rename stack_depot_disable to stack_depot_disabled to make its name look
> similar to the names of other stack depot flags.
>
> Also put stack_depot_disabled's definition together with the other flags.
>
> Also rename is_stack_depot_disabled to disable_stack_depot: this name
> looks more conventional for a function that processes a boot parameter.
>
> No functional changes.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 06/18] lib/stackdepot: annotate init and early init functions
  2023-01-30 20:49 ` [PATCH 06/18] lib/stackdepot: annotate init and early init functions andrey.konovalov
@ 2023-01-31 10:30   ` Alexander Potapenko
  2023-01-31 19:01     ` Andrey Konovalov
  0 siblings, 1 reply; 51+ messages in thread
From: Alexander Potapenko @ 2023-01-31 10:30 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Marco Elver, Andrey Konovalov, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On Mon, Jan 30, 2023 at 9:50 PM <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Add comments to stack_depot_early_init and stack_depot_init to explain
> certain parts of their implementation.
>
> Also add a pr_info message to stack_depot_early_init similar to the one
> in stack_depot_init.
>
> Also move the scale variable in stack_depot_init to the scope where it
> is being used.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
...
>
> +/* Allocates a hash table via kvmalloc. Can be used after boot. */
Nit: kvcalloc? (Doesn't really matter much)

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 07/18] lib/stackdepot: lower the indentation in stack_depot_init
  2023-01-30 20:49 ` [PATCH 07/18] lib/stackdepot: lower the indentation in stack_depot_init andrey.konovalov
@ 2023-01-31 10:37   ` Alexander Potapenko
  0 siblings, 0 replies; 51+ messages in thread
From: Alexander Potapenko @ 2023-01-31 10:37 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Marco Elver, Andrey Konovalov, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On Mon, Jan 30, 2023 at 9:50 PM <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> stack_depot_init does most things inside an if check. Move them out and
> use a goto statement instead.
>
> No functional changes.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
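
The shape of the refactor, as a hypothetical before/after (the real
function also takes a mutex and handles allocation failure; this only
illustrates the indentation change):

	/* Before: the whole body nested inside one check. */
	int stack_depot_init(void)
	{
		if (!stack_depot_disabled && !stack_table) {
			/* ... compute the size, allocate stack_table ... */
		}
		return 0;
	}

	/* After: bail out early, keep the main path one level shallower. */
	int stack_depot_init(void)
	{
		if (stack_depot_disabled || stack_table)
			goto out;
		/* ... compute the size, allocate stack_table ... */
	out:
		return 0;
	}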

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 08/18] lib/stackdepot: reorder and annotate global variables
  2023-01-30 20:49 ` [PATCH 08/18] lib/stackdepot: reorder and annotate global variables andrey.konovalov
@ 2023-01-31 10:42   ` Alexander Potapenko
  2023-01-31 19:01     ` Andrey Konovalov
  0 siblings, 1 reply; 51+ messages in thread
From: Alexander Potapenko @ 2023-01-31 10:42 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Marco Elver, Andrey Konovalov, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On Mon, Jan 30, 2023 at 9:50 PM <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Group stack depot global variables by their purpose:
>
> 1. Hash table-related variables,
> 2. Slab-related variables,
>
> and add comments.
>
> Also clean up comments for hash table-related constants.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>

...
> +/* Lock that protects the variables above. */
> +static DEFINE_RAW_SPINLOCK(depot_lock);
> +/* Whether the next slab is initialized. */
> +static int next_slab_inited;
Might be worth clarifying what happens if there's no next slab (see my
comment to patch 01).

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 09/18] lib/stackdepot: rename hash table constants and variables
  2023-01-30 20:49 ` [PATCH 09/18] lib/stackdepot: rename hash table constants and variables andrey.konovalov
@ 2023-01-31 11:33   ` Alexander Potapenko
  2023-01-31 19:01     ` Andrey Konovalov
  0 siblings, 1 reply; 51+ messages in thread
From: Alexander Potapenko @ 2023-01-31 11:33 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Marco Elver, Andrey Konovalov, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On Mon, Jan 30, 2023 at 9:50 PM <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Give more meaningful names to hash table-related constants and variables:
>
> 1. Rename STACK_HASH_SCALE to STACK_TABLE_SCALE to point out that it is
>    related to scaling the hash table.

It's only used twice, and in short lines, maybe make it
STACK_HASH_TABLE_SCALE to point that out? :)

> 2. Rename STACK_HASH_ORDER_MIN/MAX to STACK_BUCKET_NUMBER_ORDER_MIN/MAX
>    to point out that it is related to the number of hash table buckets.

How about DEPOT_BUCKET_... or STACKDEPOT_BUCKET_...?
(just bikeshedding, I don't have any strong preference).

> 3. Rename stack_hash_order to stack_bucket_number_order for the same
>    reason as #2.
>
> No functional changes.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 10/18] lib/stackdepot: rename init_stack_slab
  2023-01-30 20:49 ` [PATCH 10/18] lib/stackdepot: rename init_stack_slab andrey.konovalov
@ 2023-01-31 11:34   ` Alexander Potapenko
  0 siblings, 0 replies; 51+ messages in thread
From: Alexander Potapenko @ 2023-01-31 11:34 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Marco Elver, Andrey Konovalov, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On Mon, Jan 30, 2023 at 9:50 PM <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Rename init_stack_slab to depot_init_slab to align the name with
> depot_alloc_stack.
>
> No functional changes.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 11/18] lib/stackdepot: rename slab variables
  2023-01-30 20:49 ` [PATCH 11/18] lib/stackdepot: rename slab variables andrey.konovalov
@ 2023-01-31 11:59   ` Alexander Potapenko
  2023-01-31 19:05     ` Andrey Konovalov
  0 siblings, 1 reply; 51+ messages in thread
From: Alexander Potapenko @ 2023-01-31 11:59 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Marco Elver, Andrey Konovalov, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On Mon, Jan 30, 2023 at 9:50 PM <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Give better names to slab-related global variables: change "depot_"
> prefix to "slab_" to point out that these variables are related to
> stack depot slabs.

I started asking myself if the word "slab" is applicable here at all.
The concept of preallocating big chunks of memory to amortize the
costs belongs to the original slab allocator, but "slab" has a special
meaning in Linux, and we might be confusing people by using it in a
different sense.
What do you think?

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 12/18] lib/stackdepot: rename handle and slab constants
  2023-01-30 20:49 ` [PATCH 12/18] lib/stackdepot: rename handle and slab constants andrey.konovalov
@ 2023-01-31 12:11   ` Alexander Potapenko
  0 siblings, 0 replies; 51+ messages in thread
From: Alexander Potapenko @ 2023-01-31 12:11 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Marco Elver, Andrey Konovalov, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On Mon, Jan 30, 2023 at 9:51 PM <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Change the "STACK_ALLOC_" prefix to "DEPOT_" for the constants that
> define the number of bits in stack depot handles and the maximum number
> of slabs.
>
> The old prefix is unclear and makes one wonder how these constants
> are related to stack allocations. The new prefix is also shorter.
>
> Also simplify the comment for DEPOT_SLAB_ORDER.
>
> No functional changes.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 16/18] lib/stackdepot: annotate racy slab_index accesses
  2023-01-31  8:40   ` Marco Elver
@ 2023-01-31 18:57     ` Andrey Konovalov
  2023-01-31 21:14       ` Andrew Morton
  0 siblings, 1 reply; 51+ messages in thread
From: Andrey Konovalov @ 2023-01-31 18:57 UTC (permalink / raw)
  To: Marco Elver
  Cc: andrey.konovalov, Alexander Potapenko, Vlastimil Babka,
	kasan-dev, Evgenii Stepanov, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

On Tue, Jan 31, 2023 at 9:41 AM Marco Elver <elver@google.com> wrote:
>
> > diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> > index f291ad6a4e72..cc2fe8563af4 100644
> > --- a/lib/stackdepot.c
> > +++ b/lib/stackdepot.c
> > @@ -269,8 +269,11 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
> >                         return NULL;
> >                 }
> >
> > -               /* Move on to the next slab. */
> > -               slab_index++;
> > +               /*
> > +                * Move on to the next slab.
> > +                * WRITE_ONCE annotates a race with stack_depot_fetch.
>
> "Pairs with potential concurrent read in stack_depot_fetch()." would be clearer.
>
> I wouldn't say WRITE_ONCE annotates a race (race = involves 2+
> accesses, but here there's just 1); it just marks this access, which
> is itself paired with the potential racing read in the other function.

Will do in v2. Thanks!
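
For context, the pairing under discussion looks roughly like this
(a sketch using the names from patch 16; the read side is visible in
the stack_depot_fetch() hunk quoted earlier in the thread):

	/* Writer, in depot_alloc_stack(), under depot_lock: */
	WRITE_ONCE(slab_index, slab_index + 1);

	/* Lockless reader, in stack_depot_fetch(): */
	int slab_index_cached = READ_ONCE(slab_index);

Marking both sides documents the pair and keeps the compiler from
tearing or caching either access.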

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 15/18] lib/stacktrace, kasan, kmsan: rework extra_bits interface
  2023-01-31  8:53   ` Marco Elver
@ 2023-01-31 18:58     ` Andrey Konovalov
  2023-02-02 10:04       ` Alexander Potapenko
  0 siblings, 1 reply; 51+ messages in thread
From: Andrey Konovalov @ 2023-01-31 18:58 UTC (permalink / raw)
  To: Marco Elver
  Cc: andrey.konovalov, Alexander Potapenko, Vlastimil Babka,
	kasan-dev, Evgenii Stepanov, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

On Tue, Jan 31, 2023 at 9:54 AM Marco Elver <elver@google.com> wrote:
>
> > +depot_stack_handle_t stack_depot_set_extra_bits(depot_stack_handle_t handle,
> > +                                               unsigned int extra_bits);
>
> Can you add __must_check to this function? Either that or making
> handle an in/out param, as otherwise it might be easy to think that it
> doesn't return anything ("set_foo()" seems like it sets the
> information in the handle-associated data but not the handle itself ... in
> case someone missed the documentation).

Makes sense, will do in v2 if Alexander doesn't object to the
interface change. Thanks!

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 01/18] lib/stackdepot: fix setting next_slab_inited in init_stack_slab
  2023-01-31  9:29   ` Alexander Potapenko
@ 2023-01-31 18:59     ` Andrey Konovalov
  2023-02-01 11:51       ` Alexander Potapenko
  0 siblings, 1 reply; 51+ messages in thread
From: Andrey Konovalov @ 2023-01-31 18:59 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: andrey.konovalov, Marco Elver, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On Tue, Jan 31, 2023 at 10:30 AM Alexander Potapenko <glider@google.com> wrote:
>
> Wait, I think there's a problem here.
>
> > diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> > index 79e894cf8406..0eed9bbcf23e 100644
> > --- a/lib/stackdepot.c
> > +++ b/lib/stackdepot.c
> > @@ -105,12 +105,13 @@ static bool init_stack_slab(void **prealloc)
> >                 if (depot_index + 1 < STACK_ALLOC_MAX_SLABS) {
> If we get to this branch, but the condition is false, this means that:
>  - next_slab_inited == 0
>  - depot_index == STACK_ALLOC_MAX_SLABS+1
>  - stack_slabs[depot_index] != NULL.
>
> So stack_slabs[] is at full capacity, but upon leaving
> init_stack_slab() we'll always keep next_slab_inited==0.
>
> Now every time __stack_depot_save() is called for a known stack trace,
> it will preallocate 1<<STACK_ALLOC_ORDER pages (because
> next_slab_inited==0), then find the stack trace id in the hash, then
> pass the preallocated pages to init_stack_slab(), which will not
> change the value of next_slab_inited.
> Then the preallocated pages will be freed, and next time
> __stack_depot_save() is called they'll be allocated again.

Ah, right, missed that.

What do you think about renaming next_slab_inited to
next_slab_required and inverting the used values (0/1 -> 1/0)? This
would make this part of the code less confusing.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 01/18] lib/stackdepot: fix setting next_slab_inited in init_stack_slab
  2023-01-31  0:18   ` Andrew Morton
@ 2023-01-31 19:00     ` Andrey Konovalov
  0 siblings, 0 replies; 51+ messages in thread
From: Andrey Konovalov @ 2023-01-31 19:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: andrey.konovalov, Marco Elver, Alexander Potapenko,
	Vlastimil Babka, kasan-dev, Evgenii Stepanov, linux-mm,
	linux-kernel, Andrey Konovalov

On Tue, Jan 31, 2023 at 1:18 AM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Mon, 30 Jan 2023 21:49:25 +0100 andrey.konovalov@linux.dev wrote:
>
> > In commit 305e519ce48e ("lib/stackdepot.c: fix global out-of-bounds in
> > stack_slabs"), init_stack_slab was changed to only use preallocated
> > memory for the next slab if the slab number limit is not reached.
> > However, setting next_slab_inited was not moved together with updating
> > stack_slabs.
> >
> > Set next_slab_inited only if the preallocated memory was used for the
> > next slab.
>
> Please provide a full description of the user-visible runtime effects
> of the bug (always always).
>
> I'll add the cc:stable (per your comments in the [0/N] cover letter),
> but it's more reliable to add it to the changelog yourself.

Right, will do this next time.

> As to when I upstream this: don't know - that depends on the
> user-visible-effects thing.

Looks like there's no bug to fix after all, as per Alexander's comments.

Thanks!

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 06/18] lib/stackdepot: annotate init and early init functions
  2023-01-31 10:30   ` Alexander Potapenko
@ 2023-01-31 19:01     ` Andrey Konovalov
  0 siblings, 0 replies; 51+ messages in thread
From: Andrey Konovalov @ 2023-01-31 19:01 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: andrey.konovalov, Marco Elver, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On Tue, Jan 31, 2023 at 11:31 AM Alexander Potapenko <glider@google.com> wrote:
>
> On Mon, Jan 30, 2023 at 9:50 PM <andrey.konovalov@linux.dev> wrote:
> >
> > From: Andrey Konovalov <andreyknvl@google.com>
> >
> > Add comments to stack_depot_early_init and stack_depot_init to explain
> > certain parts of their implementation.
> >
> > Also add a pr_info message to stack_depot_early_init similar to the one
> > in stack_depot_init.
> >
> > Also move the scale variable in stack_depot_init to the scope where it
> > is being used.
> >
> > Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> Reviewed-by: Alexander Potapenko <glider@google.com>
> ...
> >
> > +/* Allocates a hash table via kvmalloc. Can be used after boot. */
> Nit: kvcalloc? (Doesn't really matter much)

Ah, right, forgot to fix this. I initially wanted to point out that
early init allocates in memblock and late init in slab or vmalloc but
then decided it's an unnecessary level of detail. Will fix in v2.
Thanks!

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 08/18] lib/stackdepot: reorder and annotate global variables
  2023-01-31 10:42   ` Alexander Potapenko
@ 2023-01-31 19:01     ` Andrey Konovalov
  0 siblings, 0 replies; 51+ messages in thread
From: Andrey Konovalov @ 2023-01-31 19:01 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: andrey.konovalov, Marco Elver, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On Tue, Jan 31, 2023 at 11:43 AM Alexander Potapenko <glider@google.com> wrote:
>
> On Mon, Jan 30, 2023 at 9:50 PM <andrey.konovalov@linux.dev> wrote:
> >
> > From: Andrey Konovalov <andreyknvl@google.com>
> >
> > Group stack depot global variables by their purpose:
> >
> > 1. Hash table-related variables,
> > 2. Slab-related variables,
> >
> > and add comments.
> >
> > Also clean up comments for hash table-related constants.
> >
> > Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> Reviewed-by: Alexander Potapenko <glider@google.com>
>
> ...
> > +/* Lock that protects the variables above. */
> > +static DEFINE_RAW_SPINLOCK(depot_lock);
> > +/* Whether the next slab is initialized. */
> > +static int next_slab_inited;
> Might be worth clarifying what happens if there's no next slab (see my
> comment to patch 01).

Will do in v2. Thanks!

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 09/18] lib/stackdepot: rename hash table constants and variables
  2023-01-31 11:33   ` Alexander Potapenko
@ 2023-01-31 19:01     ` Andrey Konovalov
  2023-02-07 15:56       ` Alexander Potapenko
  0 siblings, 1 reply; 51+ messages in thread
From: Andrey Konovalov @ 2023-01-31 19:01 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: andrey.konovalov, Marco Elver, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On Tue, Jan 31, 2023 at 12:34 PM Alexander Potapenko <glider@google.com> wrote:
>
> On Mon, Jan 30, 2023 at 9:50 PM <andrey.konovalov@linux.dev> wrote:
> >
> > From: Andrey Konovalov <andreyknvl@google.com>
> >
> > Give more meaningful names to hash table-related constants and variables:
> >
> > 1. Rename STACK_HASH_SCALE to STACK_TABLE_SCALE to point out that it is
> >    related to scaling the hash table.
>
> It's only used twice, and in short lines, maybe make it
> STACK_HASH_TABLE_SCALE to point that out? :)

Sure, sounds good :)

> > 2. Rename STACK_HASH_ORDER_MIN/MAX to STACK_BUCKET_NUMBER_ORDER_MIN/MAX
> >    to point out that it is related to the number of hash table buckets.
>
> How about DEPOT_BUCKET_... or STACKDEPOT_BUCKET_...?
> (just bikeshedding, I don't have any strong preference).

This is actually what I had initially, but then decided to keep the
prefix as STACK_ to match the stack_slabs and stack_table variables.

However, I can also rename those variables to depot_slabs and
depot_table. Do you think it makes sense?

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 11/18] lib/stackdepot: rename slab variables
  2023-01-31 11:59   ` Alexander Potapenko
@ 2023-01-31 19:05     ` Andrey Konovalov
  2023-02-01 12:38       ` Marco Elver
  0 siblings, 1 reply; 51+ messages in thread
From: Andrey Konovalov @ 2023-01-31 19:05 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: andrey.konovalov, Marco Elver, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On Tue, Jan 31, 2023 at 12:59 PM Alexander Potapenko <glider@google.com> wrote:
>
> On Mon, Jan 30, 2023 at 9:50 PM <andrey.konovalov@linux.dev> wrote:
> >
> > From: Andrey Konovalov <andreyknvl@google.com>
> >
> > Give better names to slab-related global variables: change "depot_"
> > prefix to "slab_" to point out that these variables are related to
> > stack depot slabs.
>
> I started asking myself if the word "slab" is applicable here at all.
> The concept of preallocating big chunks of memory to amortize the
> costs belongs to the original slab allocator, but "slab" has a special
> meaning in Linux, and we might be confusing people by using it in a
> different sense.
> What do you think?

Yes, I agree that using this word is a bit confusing.

Not sure what would be a good alternative though. "Region", "block",
"collection", and "chunk" come to mind, but they don't reflect the
purpose/usage of these allocations as well as "slab". Although it's
possible that my perception is affected by looking at the slab
allocator internals overly frequently :)

Do you have a suggestion of a better word?

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 16/18] lib/stackdepot: annotate racy slab_index accesses
  2023-01-31 18:57     ` Andrey Konovalov
@ 2023-01-31 21:14       ` Andrew Morton
  0 siblings, 0 replies; 51+ messages in thread
From: Andrew Morton @ 2023-01-31 21:14 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Marco Elver, andrey.konovalov, Alexander Potapenko,
	Vlastimil Babka, kasan-dev, Evgenii Stepanov, linux-mm,
	linux-kernel, Andrey Konovalov

On Tue, 31 Jan 2023 19:57:58 +0100 Andrey Konovalov <andreyknvl@gmail.com> wrote:

> On Tue, Jan 31, 2023 at 9:41 AM Marco Elver <elver@google.com> wrote:
> >
> > > diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> > > index f291ad6a4e72..cc2fe8563af4 100644
> > > --- a/lib/stackdepot.c
> > > +++ b/lib/stackdepot.c
> > > @@ -269,8 +269,11 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
> > >                         return NULL;
> > >                 }
> > >
> > > -               /* Move on to the next slab. */
> > > -               slab_index++;
> > > +               /*
> > > +                * Move on to the next slab.
> > > +                * WRITE_ONCE annotates a race with stack_depot_fetch.
> >
> > "Pairs with potential concurrent read in stack_depot_fetch()." would be clearer.
> >
> > I wouldn't say WRITE_ONCE annotates a race (race = involves 2+
> > accesses, but here there's just 1); it just marks this access, which
> > is itself paired with the potential racing read in the other function.
> 
> Will do in v2. Thanks!

Please let's not redo an 18-patch series for a single line comment
change.  If there are more substantial changes then OK.

I queued this as a to-be-squashed fixup against "lib/stackdepot: annotate
racy slab_index accesses":


From: Andrew Morton <akpm@linux-foundation.org>
Subject: lib-stackdepot-annotate-racy-slab_index-accesses-fix
Date: Tue Jan 31 01:10:50 PM PST 2023

enhance comment, per Marco

Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Evgenii Stepanov <eugenis@google.com>
Cc: Marco Elver <elver@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 lib/stackdepot.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/lib/stackdepot.c~lib-stackdepot-annotate-racy-slab_index-accesses-fix
+++ a/lib/stackdepot.c
@@ -271,7 +271,8 @@ depot_alloc_stack(unsigned long *entries
 
 		/*
 		 * Move on to the next slab.
-		 * WRITE_ONCE annotates a race with stack_depot_fetch.
+		 * WRITE_ONCE pairs with potential concurrent read in
+		 * stack_depot_fetch().
 		 */
 		WRITE_ONCE(slab_index, slab_index + 1);
 		slab_offset = 0;
_



^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 01/18] lib/stackdepot: fix setting next_slab_inited in init_stack_slab
  2023-01-31 18:59     ` Andrey Konovalov
@ 2023-02-01 11:51       ` Alexander Potapenko
  0 siblings, 0 replies; 51+ messages in thread
From: Alexander Potapenko @ 2023-02-01 11:51 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: andrey.konovalov, Marco Elver, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On Tue, Jan 31, 2023 at 8:00 PM Andrey Konovalov <andreyknvl@gmail.com> wrote:
>
> On Tue, Jan 31, 2023 at 10:30 AM Alexander Potapenko <glider@google.com> wrote:
> >
> > Wait, I think there's a problem here.
> >
> > > diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> > > index 79e894cf8406..0eed9bbcf23e 100644
> > > --- a/lib/stackdepot.c
> > > +++ b/lib/stackdepot.c
> > > @@ -105,12 +105,13 @@ static bool init_stack_slab(void **prealloc)
> > >                 if (depot_index + 1 < STACK_ALLOC_MAX_SLABS) {
> > If we get to this branch, but the condition is false, this means that:
> >  - next_slab_inited == 0
> >  - depot_index == STACK_ALLOC_MAX_SLABS - 1 (the last valid index)
> >  - stack_slabs[depot_index] != NULL.
> >
> > So stack_slabs[] is at full capacity, but upon leaving
> > init_stack_slab() we'll always keep next_slab_inited==0.
> >
> > Now every time __stack_depot_save() is called for a known stack trace,
> > it will preallocate 1<<STACK_ALLOC_ORDER pages (because
> > next_slab_inited==0), then find the stack trace id in the hash, then
> > pass the preallocated pages to init_stack_slab(), which will not
> > change the value of next_slab_inited.
> > Then the preallocated pages will be freed, and next time
> > __stack_depot_save() is called they'll be allocated again.
>
> Ah, right, missed that.
>
> What do you think about renaming next_slab_inited to
> next_slab_required and inverting the used values (0/1 -> 1/0)? This
> would make this part of the code less confusing.

"Required" as in "requires a preallocated buffer, but does not have one yet"?
Yes, that's probably better.
(In any case we'll need to add a comment to that variable explaining
the circumstances under which one or another value is possible).
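
A hypothetical sketch of such a comment, using the inverted semantics
proposed above (names and wording are illustrative, not from the tree):

	/*
	 * next_slab_required == 1: the next slab still needs a preallocated
	 * buffer, so __stack_depot_save() should try to preallocate one.
	 * next_slab_required == 0: the next slab already has memory, or
	 * stack_slabs[] is at full capacity and no buffer would help.
	 */
	static int next_slab_required = 1;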

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 11/18] lib/stackdepot: rename slab variables
  2023-01-31 19:05     ` Andrey Konovalov
@ 2023-02-01 12:38       ` Marco Elver
  2023-02-08 16:43         ` Vlastimil Babka
  0 siblings, 1 reply; 51+ messages in thread
From: Marco Elver @ 2023-02-01 12:38 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Alexander Potapenko, andrey.konovalov, Vlastimil Babka,
	kasan-dev, Evgenii Stepanov, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

On Tue, 31 Jan 2023 at 20:06, Andrey Konovalov <andreyknvl@gmail.com> wrote:
>
> On Tue, Jan 31, 2023 at 12:59 PM Alexander Potapenko <glider@google.com> wrote:
> >
> > On Mon, Jan 30, 2023 at 9:50 PM <andrey.konovalov@linux.dev> wrote:
> > >
> > > From: Andrey Konovalov <andreyknvl@google.com>
> > >
> > > Give better names to slab-related global variables: change "depot_"
> > > prefix to "slab_" to point out that these variables are related to
> > > stack depot slabs.
> >
> > I started asking myself if the word "slab" is applicable here at all.
> > The concept of preallocating big chunks of memory to amortize the
> > costs belongs to the original slab allocator, but "slab" has a special
> > meaning in Linux, and we might be confusing people by using it in a
> > different sense.
> > What do you think?
>
> Yes, I agree that using this word is a bit confusing.
>
> Not sure what would be a good alternative though. "Region", "block",
> "collection", and "chunk" come to mind, but they don't reflect the
> purpose/usage of these allocations as well as "slab". Although it's
> possible that my perception is affected by looking at the slab
> allocator internals overly frequently :)
>
> Do you have a suggestion of a better word?

I'd vote for "pool" and "chunk(s)" (within that pool).

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 15/18] lib/stacktrace, kasan, kmsan: rework extra_bits interface
  2023-01-30 20:49 ` [PATCH 15/18] lib/stacktrace, kasan, kmsan: rework extra_bits interface andrey.konovalov
  2023-01-31  8:53   ` Marco Elver
@ 2023-02-02 10:03   ` Alexander Potapenko
  1 sibling, 0 replies; 51+ messages in thread
From: Alexander Potapenko @ 2023-02-02 10:03 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Marco Elver, Andrey Konovalov, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

> This change also fixes a minor issue in the old code: __stack_depot_save
> does not return NULL if saving the stack trace fails and extra_bits is used.

Good catch!


> + *
> + * Stack depot handles have a few unused bits, which can be used for storing
> + * user-specific information. These bits are transparent to the stack depot.
> + */
> +depot_stack_handle_t stack_depot_set_extra_bits(depot_stack_handle_t handle,
> +                                               unsigned int extra_bits)
> +{
> +       union handle_parts parts = { .handle = handle };
> +
> +       parts.extra = extra_bits;
> +       return parts.handle;
> +}
> +EXPORT_SYMBOL(stack_depot_set_extra_bits);

You'd need to check for handle==NULL here, otherwise we're in the same
situation as when __stack_depot_save returns NULL and we happily apply
extra bits on top of it.
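
A minimal sketch of the check, applied to the function body quoted
above (a zero handle is the failure value, so it is returned as-is):

	depot_stack_handle_t stack_depot_set_extra_bits(depot_stack_handle_t handle,
							unsigned int extra_bits)
	{
		union handle_parts parts = { .handle = handle };

		/* Don't set extra bits on a failed (zero) handle. */
		if (!handle)
			return 0;

		parts.extra = extra_bits;
		return parts.handle;
	}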

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 15/18] lib/stacktrace, kasan, kmsan: rework extra_bits interface
  2023-01-31 18:58     ` Andrey Konovalov
@ 2023-02-02 10:04       ` Alexander Potapenko
  0 siblings, 0 replies; 51+ messages in thread
From: Alexander Potapenko @ 2023-02-02 10:04 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Marco Elver, andrey.konovalov, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On Tue, Jan 31, 2023 at 7:58 PM Andrey Konovalov <andreyknvl@gmail.com> wrote:
>
> On Tue, Jan 31, 2023 at 9:54 AM Marco Elver <elver@google.com> wrote:
> >
> > > +depot_stack_handle_t stack_depot_set_extra_bits(depot_stack_handle_t handle,
> > > +                                               unsigned int extra_bits);
> >
> > Can you add __must_check to this function? Either that or making
> > handle an in/out param, as otherwise it might be easy to think that it
> > doesn't return anything ("set_foo()" seems like it sets the
> > information in the handle-associated data but not the handle itself ... in
> > case someone missed the documentation).
>
> Makes sense, will do in v2 if Alexander doesn't object to the
> interface change. Thanks!

I do not object. Thanks for doing this!

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 09/18] lib/stackdepot: rename hash table constants and variables
  2023-01-31 19:01     ` Andrey Konovalov
@ 2023-02-07 15:56       ` Alexander Potapenko
  0 siblings, 0 replies; 51+ messages in thread
From: Alexander Potapenko @ 2023-02-07 15:56 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: andrey.konovalov, Marco Elver, Vlastimil Babka, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On Tue, Jan 31, 2023 at 8:02 PM Andrey Konovalov <andreyknvl@gmail.com> wrote:
>
> On Tue, Jan 31, 2023 at 12:34 PM Alexander Potapenko <glider@google.com> wrote:
> >
> > On Mon, Jan 30, 2023 at 9:50 PM <andrey.konovalov@linux.dev> wrote:
> > >
> > > From: Andrey Konovalov <andreyknvl@google.com>
> > >
> > > Give more meaningful names to hash table-related constants and variables:
> > >
> > > 1. Rename STACK_HASH_SCALE to STACK_TABLE_SCALE to point out that it is
> > >    related to scaling the hash table.
> >
> > It's only used twice, and in short lines, maybe make it
> > STACK_HASH_TABLE_SCALE to point that out? :)
>
> Sure, sounds good :)
>
> > > 2. Rename STACK_HASH_ORDER_MIN/MAX to STACK_BUCKET_NUMBER_ORDER_MIN/MAX
> > >    to point out that it is related to the number of hash table buckets.
> >
> > How about DEPOT_BUCKET_... or STACKDEPOT_BUCKET_...?
> > (just bikeshedding, I don't have any strong preference).
>
> This is actually what I had initially, but then decided to keep the
> prefix as STACK_ to match the stack_slabs and stack_table variables.

Ok, let's keep your version then.
Thanks!

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 04/18] lib/stackdepot, mm: rename stack_depot_want_early_init
  2023-01-30 20:49 ` [PATCH 04/18] lib/stackdepot, mm: rename stack_depot_want_early_init andrey.konovalov
  2023-01-31 10:26   ` Alexander Potapenko
@ 2023-02-08 16:40   ` Vlastimil Babka
  1 sibling, 0 replies; 51+ messages in thread
From: Vlastimil Babka @ 2023-02-08 16:40 UTC (permalink / raw)
  To: andrey.konovalov, Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, kasan-dev, Evgenii Stepanov, Andrew Morton,
	linux-mm, linux-kernel, Andrey Konovalov

On 1/30/23 21:49, andrey.konovalov@linux.dev wrote:
> From: Andrey Konovalov <andreyknvl@google.com>
> 
> Rename stack_depot_want_early_init to stack_depot_request_early_init.
> 
> The old name is confusing, as it hints at returning some kind of intention
> of stack depot. The new name reflects that this function requests an action
> from stack depot instead.
> 
> No functional changes.
> 
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  include/linux/stackdepot.h | 14 +++++++-------
>  lib/stackdepot.c           | 10 +++++-----
>  mm/page_owner.c            |  2 +-
>  mm/slub.c                  |  4 ++--
>  4 files changed, 15 insertions(+), 15 deletions(-)
> 
> diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
> index 1296a6eeaec0..c4e3abc16b16 100644
> --- a/include/linux/stackdepot.h
> +++ b/include/linux/stackdepot.h
> @@ -31,26 +31,26 @@ typedef u32 depot_stack_handle_t;
>   * enabled as part of mm_init(), for subsystems where it's known at compile time
>   * that stack depot will be used.
>   *
> - * Another alternative is to call stack_depot_want_early_init(), when the
> + * Another alternative is to call stack_depot_request_early_init(), when the
>   * decision to use stack depot is taken e.g. when evaluating kernel boot
>   * parameters, which precedes the enablement point in mm_init().
>   *
> - * stack_depot_init() and stack_depot_want_early_init() can be called regardless
> - * of CONFIG_STACKDEPOT and are no-op when disabled. The actual save/fetch/print
> - * functions should only be called from code that makes sure CONFIG_STACKDEPOT
> - * is enabled.
> + * stack_depot_init() and stack_depot_request_early_init() can be called
> + * regardless of CONFIG_STACKDEPOT and are no-op when disabled. The actual
> + * save/fetch/print functions should only be called from code that makes sure
> + * CONFIG_STACKDEPOT is enabled.
>   */
>  #ifdef CONFIG_STACKDEPOT
>  int stack_depot_init(void);
>  
> -void __init stack_depot_want_early_init(void);
> +void __init stack_depot_request_early_init(void);
>  
>  /* This is supposed to be called only from mm_init() */
>  int __init stack_depot_early_init(void);
>  #else
>  static inline int stack_depot_init(void) { return 0; }
>  
> -static inline void stack_depot_want_early_init(void) { }
> +static inline void stack_depot_request_early_init(void) { }
>  
>  static inline int stack_depot_early_init(void)	{ return 0; }
>  #endif
> diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> index 90c4dd48d75e..8743fad1485f 100644
> --- a/lib/stackdepot.c
> +++ b/lib/stackdepot.c
> @@ -71,7 +71,7 @@ struct stack_record {
>  	unsigned long entries[];	/* Variable-sized array of entries. */
>  };
>  
> -static bool __stack_depot_want_early_init __initdata = IS_ENABLED(CONFIG_STACKDEPOT_ALWAYS_INIT);
> +static bool __stack_depot_early_init_requested __initdata = IS_ENABLED(CONFIG_STACKDEPOT_ALWAYS_INIT);
>  static bool __stack_depot_early_init_passed __initdata;
>  
>  static void *stack_slabs[STACK_ALLOC_MAX_SLABS];
> @@ -107,12 +107,12 @@ static int __init is_stack_depot_disabled(char *str)
>  }
>  early_param("stack_depot_disable", is_stack_depot_disabled);
>  
> -void __init stack_depot_want_early_init(void)
> +void __init stack_depot_request_early_init(void)
>  {
> -	/* Too late to request early init now */
> +	/* Too late to request early init now. */
>  	WARN_ON(__stack_depot_early_init_passed);
>  
> -	__stack_depot_want_early_init = true;
> +	__stack_depot_early_init_requested = true;
>  }
>  
>  int __init stack_depot_early_init(void)
> @@ -128,7 +128,7 @@ int __init stack_depot_early_init(void)
>  	if (kasan_enabled() && !stack_hash_order)
>  		stack_hash_order = STACK_HASH_ORDER_MAX;
>  
> -	if (!__stack_depot_want_early_init || stack_depot_disable)
> +	if (!__stack_depot_early_init_requested || stack_depot_disable)
>  		return 0;
>  
>  	if (stack_hash_order)
> diff --git a/mm/page_owner.c b/mm/page_owner.c
> index 2d27f532df4c..90a4a087e6c7 100644
> --- a/mm/page_owner.c
> +++ b/mm/page_owner.c
> @@ -48,7 +48,7 @@ static int __init early_page_owner_param(char *buf)
>  	int ret = kstrtobool(buf, &page_owner_enabled);
>  
>  	if (page_owner_enabled)
> -		stack_depot_want_early_init();
> +		stack_depot_request_early_init();
>  
>  	return ret;
>  }
> diff --git a/mm/slub.c b/mm/slub.c
> index 13459c69095a..f2c6c356bc36 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1592,7 +1592,7 @@ static int __init setup_slub_debug(char *str)
>  		} else {
>  			slab_list_specified = true;
>  			if (flags & SLAB_STORE_USER)
> -				stack_depot_want_early_init();
> +				stack_depot_request_early_init();
>  		}
>  	}
>  
> @@ -1611,7 +1611,7 @@ static int __init setup_slub_debug(char *str)
>  out:
>  	slub_debug = global_flags;
>  	if (slub_debug & SLAB_STORE_USER)
> -		stack_depot_want_early_init();
> +		stack_depot_request_early_init();
>  	if (slub_debug != 0 || slub_debug_string)
>  		static_branch_enable(&slub_debug_enabled);
>  	else


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 11/18] lib/stackdepot: rename slab variables
  2023-02-01 12:38       ` Marco Elver
@ 2023-02-08 16:43         ` Vlastimil Babka
  0 siblings, 0 replies; 51+ messages in thread
From: Vlastimil Babka @ 2023-02-08 16:43 UTC (permalink / raw)
  To: Marco Elver, Andrey Konovalov
  Cc: Alexander Potapenko, andrey.konovalov, kasan-dev,
	Evgenii Stepanov, Andrew Morton, linux-mm, linux-kernel,
	Andrey Konovalov

On 2/1/23 13:38, Marco Elver wrote:
> On Tue, 31 Jan 2023 at 20:06, Andrey Konovalov <andreyknvl@gmail.com> wrote:
>>
>> On Tue, Jan 31, 2023 at 12:59 PM Alexander Potapenko <glider@google.com> wrote:
>> >
>> > On Mon, Jan 30, 2023 at 9:50 PM <andrey.konovalov@linux.dev> wrote:
>> > >
>> > > From: Andrey Konovalov <andreyknvl@google.com>
>> > >
>> > > Give better names to slab-related global variables: change "depot_"
>> > > prefix to "slab_" to point out that these variables are related to
>> > > stack depot slabs.
>> >
>> > I started asking myself if the word "slab" is applicable here at all.
>> > The concept of preallocating big chunks of memory to amortize the
>> > costs belongs to the original slab allocator, but "slab" has a special
>> > meaning in Linux, and we might be confusing people by using it in a
>> > different sense.
>> > What do you think?
>>
>> Yes, I agree that using this word is a bit confusing.
>>
>> Not sure what would be a good alternative though. "Region", "block",
>> "collection", and "chunk" come to mind, but they don't reflect the
>> purpose/usage of these allocations as well as "slab". Although it's
>> possible that my perception is affected by looking at the slab
>> allocator internals overly frequently :)
>>
>> Do you have a suggestion of a better word?
> 
> I'd vote for "pool" and "chunk(s)" (within that pool).

+1, also wasn't happy that "slab" is being used out of the usual context here :)

Thanks

^ permalink raw reply	[flat|nested] 51+ messages in thread

end of thread, other threads:[~2023-02-08 16:44 UTC | newest]

Thread overview: 51+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-01-30 20:49 [PATCH 00/18] lib/stackdepot: fixes and clean-ups andrey.konovalov
2023-01-30 20:49 ` [PATCH 01/18] lib/stackdepot: fix setting next_slab_inited in init_stack_slab andrey.konovalov
2023-01-31  0:18   ` Andrew Morton
2023-01-31 19:00     ` Andrey Konovalov
2023-01-31  9:07   ` Alexander Potapenko
2023-01-31  9:29   ` Alexander Potapenko
2023-01-31 18:59     ` Andrey Konovalov
2023-02-01 11:51       ` Alexander Potapenko
2023-01-30 20:49 ` [PATCH 02/18] lib/stackdepot: put functions in logical order andrey.konovalov
2023-01-31 10:20   ` Alexander Potapenko
2023-01-30 20:49 ` [PATCH 03/18] lib/stackdepot: use pr_fmt to define message format andrey.konovalov
2023-01-31 10:24   ` Alexander Potapenko
2023-01-30 20:49 ` [PATCH 04/18] lib/stackdepot, mm: rename stack_depot_want_early_init andrey.konovalov
2023-01-31 10:26   ` Alexander Potapenko
2023-02-08 16:40   ` Vlastimil Babka
2023-01-30 20:49 ` [PATCH 05/18] lib/stackdepot: rename stack_depot_disable andrey.konovalov
2023-01-31 10:28   ` Alexander Potapenko
2023-01-30 20:49 ` [PATCH 06/18] lib/stackdepot: annotate init and early init functions andrey.konovalov
2023-01-31 10:30   ` Alexander Potapenko
2023-01-31 19:01     ` Andrey Konovalov
2023-01-30 20:49 ` [PATCH 07/18] lib/stackdepot: lower the indentation in stack_depot_init andrey.konovalov
2023-01-31 10:37   ` Alexander Potapenko
2023-01-30 20:49 ` [PATCH 08/18] lib/stackdepot: reorder and annotate global variables andrey.konovalov
2023-01-31 10:42   ` Alexander Potapenko
2023-01-31 19:01     ` Andrey Konovalov
2023-01-30 20:49 ` [PATCH 09/18] lib/stackdepot: rename hash table constants and variables andrey.konovalov
2023-01-31 11:33   ` Alexander Potapenko
2023-01-31 19:01     ` Andrey Konovalov
2023-02-07 15:56       ` Alexander Potapenko
2023-01-30 20:49 ` [PATCH 10/18] lib/stackdepot: rename init_stack_slab andrey.konovalov
2023-01-31 11:34   ` Alexander Potapenko
2023-01-30 20:49 ` [PATCH 11/18] lib/stackdepot: rename slab variables andrey.konovalov
2023-01-31 11:59   ` Alexander Potapenko
2023-01-31 19:05     ` Andrey Konovalov
2023-02-01 12:38       ` Marco Elver
2023-02-08 16:43         ` Vlastimil Babka
2023-01-30 20:49 ` [PATCH 12/18] lib/stackdepot: rename handle and slab constants andrey.konovalov
2023-01-31 12:11   ` Alexander Potapenko
2023-01-30 20:49 ` [PATCH 13/18] lib/stacktrace: drop impossible WARN_ON for depot_init_slab andrey.konovalov
2023-01-30 20:49 ` [PATCH 14/18] lib/stackdepot: annotate depot_init_slab and depot_alloc_stack andrey.konovalov
2023-01-30 20:49 ` [PATCH 15/18] lib/stacktrace, kasan, kmsan: rework extra_bits interface andrey.konovalov
2023-01-31  8:53   ` Marco Elver
2023-01-31 18:58     ` Andrey Konovalov
2023-02-02 10:04       ` Alexander Potapenko
2023-02-02 10:03   ` Alexander Potapenko
2023-01-30 20:49 ` [PATCH 16/18] lib/stackdepot: annotate racy slab_index accesses andrey.konovalov
2023-01-31  8:40   ` Marco Elver
2023-01-31 18:57     ` Andrey Konovalov
2023-01-31 21:14       ` Andrew Morton
2023-01-30 20:49 ` [PATCH 17/18] lib/stackdepot: various comments clean-ups andrey.konovalov
2023-01-30 20:49 ` [PATCH 18/18] lib/stackdepot: move documentation comments to stackdepot.h andrey.konovalov
