linux-kernel.vger.kernel.org archive mirror
* [PATCH v2] kdb: Get rid of custom debug heap allocator
@ 2021-03-23  6:55 Sumit Garg
  2021-06-17 11:25 ` Daniel Thompson
  0 siblings, 1 reply; 10+ messages in thread
From: Sumit Garg @ 2021-03-23  6:55 UTC (permalink / raw)
  To: kgdb-bugreport
  Cc: daniel.thompson, jason.wessel, dianders, linux-kernel, Sumit Garg

Currently the only user of the debug heap is kdbnearsym(), which can be
modified to use a statically allocated buffer for the symbol name instead,
matching its current usage. So do that and hence remove the custom debug
heap allocator.

Note that this change puts a restriction on kdbnearsym() callers to use
the shared namebuf carefully: a caller must consume the returned symbol
before making another call to fetch a different symbol.

This change has been tested using kgdbtest on arm64, which doesn't show
any regressions.

Suggested-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
---

Changes in v2:
- Use a single static buffer for the symbol name in kdbnearsym() instead of
  per-caller buffers allocated on the stack.

 kernel/debug/kdb/kdb_debugger.c |   1 -
 kernel/debug/kdb/kdb_private.h  |   5 -
 kernel/debug/kdb/kdb_support.c  | 318 ++------------------------------
 3 files changed, 15 insertions(+), 309 deletions(-)

diff --git a/kernel/debug/kdb/kdb_debugger.c b/kernel/debug/kdb/kdb_debugger.c
index 0220afda3200..e91fc3e4edd5 100644
--- a/kernel/debug/kdb/kdb_debugger.c
+++ b/kernel/debug/kdb/kdb_debugger.c
@@ -140,7 +140,6 @@ int kdb_stub(struct kgdb_state *ks)
 	 */
 	kdb_common_deinit_state();
 	KDB_STATE_CLEAR(PAGER);
-	kdbnearsym_cleanup();
 	if (error == KDB_CMD_KGDB) {
 		if (KDB_STATE(DOING_KGDB))
 			KDB_STATE_CLEAR(DOING_KGDB);
diff --git a/kernel/debug/kdb/kdb_private.h b/kernel/debug/kdb/kdb_private.h
index b857a84de3b5..ec91d7e02334 100644
--- a/kernel/debug/kdb/kdb_private.h
+++ b/kernel/debug/kdb/kdb_private.h
@@ -109,7 +109,6 @@ extern int kdbgetaddrarg(int, const char **, int*, unsigned long *,
 			 long *, char **);
 extern int kdbgetsymval(const char *, kdb_symtab_t *);
 extern int kdbnearsym(unsigned long, kdb_symtab_t *);
-extern void kdbnearsym_cleanup(void);
 extern char *kdb_strdup(const char *str, gfp_t type);
 extern void kdb_symbol_print(unsigned long, const kdb_symtab_t *, unsigned int);
 
@@ -233,10 +232,6 @@ extern struct task_struct *kdb_curr_task(int);
 
 #define GFP_KDB (in_dbg_master() ? GFP_ATOMIC : GFP_KERNEL)
 
-extern void *debug_kmalloc(size_t size, gfp_t flags);
-extern void debug_kfree(void *);
-extern void debug_kusage(void);
-
 extern struct task_struct *kdb_current_task;
 extern struct pt_regs *kdb_current_regs;
 
diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
index b59aad1f0b55..e131d74abb8d 100644
--- a/kernel/debug/kdb/kdb_support.c
+++ b/kernel/debug/kdb/kdb_support.c
@@ -57,35 +57,26 @@ int kdbgetsymval(const char *symname, kdb_symtab_t *symtab)
 }
 EXPORT_SYMBOL(kdbgetsymval);
 
-static char *kdb_name_table[100];	/* arbitrary size */
-
 /*
- * kdbnearsym -	Return the name of the symbol with the nearest address
- *	less than 'addr'.
+ * kdbnearsym() - Return the name of the symbol with the nearest address
+ *                less than @addr.
+ * @addr: Address to check for near symbol
+ * @symtab: Structure to receive results
  *
- * Parameters:
- *	addr	Address to check for symbol near
- *	symtab  Structure to receive results
- * Returns:
- *	0	No sections contain this address, symtab zero filled
- *	1	Address mapped to module/symbol/section, data in symtab
- * Remarks:
- *	2.6 kallsyms has a "feature" where it unpacks the name into a
- *	string.  If that string is reused before the caller expects it
- *	then the caller sees its string change without warning.  To
- *	avoid cluttering up the main kdb code with lots of kdb_strdup,
- *	tests and kfree calls, kdbnearsym maintains an LRU list of the
- *	last few unique strings.  The list is sized large enough to
- *	hold active strings, no kdb caller of kdbnearsym makes more
- *	than ~20 later calls before using a saved value.
+ * Note here that only single statically allocated namebuf is used for every
+ * symbol, so the caller should consume it immediately prior to another call
+ * to fetch a different symbol.
+ *
+ * Return:
+ * * 0 - No sections contain this address, symtab zero filled
+ * * 1 - Address mapped to module/symbol/section, data in symtab
  */
 int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
 {
 	int ret = 0;
 	unsigned long symbolsize = 0;
 	unsigned long offset = 0;
-#define knt1_size 128		/* must be >= kallsyms table size */
-	char *knt1 = NULL;
+	static char namebuf[KSYM_NAME_LEN];
 
 	if (KDB_DEBUG(AR))
 		kdb_printf("kdbnearsym: addr=0x%lx, symtab=%px\n", addr, symtab);
@@ -93,14 +84,9 @@ int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
 
 	if (addr < 4096)
 		goto out;
-	knt1 = debug_kmalloc(knt1_size, GFP_ATOMIC);
-	if (!knt1) {
-		kdb_printf("kdbnearsym: addr=0x%lx cannot kmalloc knt1\n",
-			   addr);
-		goto out;
-	}
+
 	symtab->sym_name = kallsyms_lookup(addr, &symbolsize , &offset,
-				(char **)(&symtab->mod_name), knt1);
+				(char **)(&symtab->mod_name), namebuf);
 	if (offset > 8*1024*1024) {
 		symtab->sym_name = NULL;
 		addr = offset = symbolsize = 0;
@@ -109,42 +95,6 @@ int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
 	symtab->sym_end = symtab->sym_start + symbolsize;
 	ret = symtab->sym_name != NULL && *(symtab->sym_name) != '\0';
 
-	if (ret) {
-		int i;
-		/* Another 2.6 kallsyms "feature".  Sometimes the sym_name is
-		 * set but the buffer passed into kallsyms_lookup is not used,
-		 * so it contains garbage.  The caller has to work out which
-		 * buffer needs to be saved.
-		 *
-		 * What was Rusty smoking when he wrote that code?
-		 */
-		if (symtab->sym_name != knt1) {
-			strncpy(knt1, symtab->sym_name, knt1_size);
-			knt1[knt1_size-1] = '\0';
-		}
-		for (i = 0; i < ARRAY_SIZE(kdb_name_table); ++i) {
-			if (kdb_name_table[i] &&
-			    strcmp(kdb_name_table[i], knt1) == 0)
-				break;
-		}
-		if (i >= ARRAY_SIZE(kdb_name_table)) {
-			debug_kfree(kdb_name_table[0]);
-			memmove(kdb_name_table, kdb_name_table+1,
-			       sizeof(kdb_name_table[0]) *
-			       (ARRAY_SIZE(kdb_name_table)-1));
-		} else {
-			debug_kfree(knt1);
-			knt1 = kdb_name_table[i];
-			memmove(kdb_name_table+i, kdb_name_table+i+1,
-			       sizeof(kdb_name_table[0]) *
-			       (ARRAY_SIZE(kdb_name_table)-i-1));
-		}
-		i = ARRAY_SIZE(kdb_name_table) - 1;
-		kdb_name_table[i] = knt1;
-		symtab->sym_name = kdb_name_table[i];
-		knt1 = NULL;
-	}
-
 	if (symtab->mod_name == NULL)
 		symtab->mod_name = "kernel";
 	if (KDB_DEBUG(AR))
@@ -152,23 +102,10 @@ int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
 		   "symtab->mod_name=%px, symtab->sym_name=%px (%s)\n", ret,
 		   symtab->sym_start, symtab->mod_name, symtab->sym_name,
 		   symtab->sym_name);
-
 out:
-	debug_kfree(knt1);
 	return ret;
 }
 
-void kdbnearsym_cleanup(void)
-{
-	int i;
-	for (i = 0; i < ARRAY_SIZE(kdb_name_table); ++i) {
-		if (kdb_name_table[i]) {
-			debug_kfree(kdb_name_table[i]);
-			kdb_name_table[i] = NULL;
-		}
-	}
-}
-
 static char ks_namebuf[KSYM_NAME_LEN+1], ks_namebuf_prev[KSYM_NAME_LEN+1];
 
 /*
@@ -259,6 +196,7 @@ void kdb_symbol_print(unsigned long addr, const kdb_symtab_t *symtab_p,
 		      unsigned int punc)
 {
 	kdb_symtab_t symtab, *symtab_p2;
+
 	if (symtab_p) {
 		symtab_p2 = (kdb_symtab_t *)symtab_p;
 	} else {
@@ -665,232 +603,6 @@ unsigned long kdb_task_state(const struct task_struct *p, unsigned long mask)
 	return (mask & kdb_task_state_string(state)) != 0;
 }
 
-/* Last ditch allocator for debugging, so we can still debug even when
- * the GFP_ATOMIC pool has been exhausted.  The algorithms are tuned
- * for space usage, not for speed.  One smallish memory pool, the free
- * chain is always in ascending address order to allow coalescing,
- * allocations are done in brute force best fit.
- */
-
-struct debug_alloc_header {
-	u32 next;	/* offset of next header from start of pool */
-	u32 size;
-	void *caller;
-};
-
-/* The memory returned by this allocator must be aligned, which means
- * so must the header size.  Do not assume that sizeof(struct
- * debug_alloc_header) is a multiple of the alignment, explicitly
- * calculate the overhead of this header, including the alignment.
- * The rest of this code must not use sizeof() on any header or
- * pointer to a header.
- */
-#define dah_align 8
-#define dah_overhead ALIGN(sizeof(struct debug_alloc_header), dah_align)
-
-static u64 debug_alloc_pool_aligned[256*1024/dah_align];	/* 256K pool */
-static char *debug_alloc_pool = (char *)debug_alloc_pool_aligned;
-static u32 dah_first, dah_first_call = 1, dah_used, dah_used_max;
-
-/* Locking is awkward.  The debug code is called from all contexts,
- * including non maskable interrupts.  A normal spinlock is not safe
- * in NMI context.  Try to get the debug allocator lock, if it cannot
- * be obtained after a second then give up.  If the lock could not be
- * previously obtained on this cpu then only try once.
- *
- * sparse has no annotation for "this function _sometimes_ acquires a
- * lock", so fudge the acquire/release notation.
- */
-static DEFINE_SPINLOCK(dap_lock);
-static int get_dap_lock(void)
-	__acquires(dap_lock)
-{
-	static int dap_locked = -1;
-	int count;
-	if (dap_locked == smp_processor_id())
-		count = 1;
-	else
-		count = 1000;
-	while (1) {
-		if (spin_trylock(&dap_lock)) {
-			dap_locked = -1;
-			return 1;
-		}
-		if (!count--)
-			break;
-		udelay(1000);
-	}
-	dap_locked = smp_processor_id();
-	__acquire(dap_lock);
-	return 0;
-}
-
-void *debug_kmalloc(size_t size, gfp_t flags)
-{
-	unsigned int rem, h_offset;
-	struct debug_alloc_header *best, *bestprev, *prev, *h;
-	void *p = NULL;
-	if (!get_dap_lock()) {
-		__release(dap_lock);	/* we never actually got it */
-		return NULL;
-	}
-	h = (struct debug_alloc_header *)(debug_alloc_pool + dah_first);
-	if (dah_first_call) {
-		h->size = sizeof(debug_alloc_pool_aligned) - dah_overhead;
-		dah_first_call = 0;
-	}
-	size = ALIGN(size, dah_align);
-	prev = best = bestprev = NULL;
-	while (1) {
-		if (h->size >= size && (!best || h->size < best->size)) {
-			best = h;
-			bestprev = prev;
-			if (h->size == size)
-				break;
-		}
-		if (!h->next)
-			break;
-		prev = h;
-		h = (struct debug_alloc_header *)(debug_alloc_pool + h->next);
-	}
-	if (!best)
-		goto out;
-	rem = best->size - size;
-	/* The pool must always contain at least one header */
-	if (best->next == 0 && bestprev == NULL && rem < dah_overhead)
-		goto out;
-	if (rem >= dah_overhead) {
-		best->size = size;
-		h_offset = ((char *)best - debug_alloc_pool) +
-			   dah_overhead + best->size;
-		h = (struct debug_alloc_header *)(debug_alloc_pool + h_offset);
-		h->size = rem - dah_overhead;
-		h->next = best->next;
-	} else
-		h_offset = best->next;
-	best->caller = __builtin_return_address(0);
-	dah_used += best->size;
-	dah_used_max = max(dah_used, dah_used_max);
-	if (bestprev)
-		bestprev->next = h_offset;
-	else
-		dah_first = h_offset;
-	p = (char *)best + dah_overhead;
-	memset(p, POISON_INUSE, best->size - 1);
-	*((char *)p + best->size - 1) = POISON_END;
-out:
-	spin_unlock(&dap_lock);
-	return p;
-}
-
-void debug_kfree(void *p)
-{
-	struct debug_alloc_header *h;
-	unsigned int h_offset;
-	if (!p)
-		return;
-	if ((char *)p < debug_alloc_pool ||
-	    (char *)p >= debug_alloc_pool + sizeof(debug_alloc_pool_aligned)) {
-		kfree(p);
-		return;
-	}
-	if (!get_dap_lock()) {
-		__release(dap_lock);	/* we never actually got it */
-		return;		/* memory leak, cannot be helped */
-	}
-	h = (struct debug_alloc_header *)((char *)p - dah_overhead);
-	memset(p, POISON_FREE, h->size - 1);
-	*((char *)p + h->size - 1) = POISON_END;
-	h->caller = NULL;
-	dah_used -= h->size;
-	h_offset = (char *)h - debug_alloc_pool;
-	if (h_offset < dah_first) {
-		h->next = dah_first;
-		dah_first = h_offset;
-	} else {
-		struct debug_alloc_header *prev;
-		unsigned int prev_offset;
-		prev = (struct debug_alloc_header *)(debug_alloc_pool +
-						     dah_first);
-		while (1) {
-			if (!prev->next || prev->next > h_offset)
-				break;
-			prev = (struct debug_alloc_header *)
-				(debug_alloc_pool + prev->next);
-		}
-		prev_offset = (char *)prev - debug_alloc_pool;
-		if (prev_offset + dah_overhead + prev->size == h_offset) {
-			prev->size += dah_overhead + h->size;
-			memset(h, POISON_FREE, dah_overhead - 1);
-			*((char *)h + dah_overhead - 1) = POISON_END;
-			h = prev;
-			h_offset = prev_offset;
-		} else {
-			h->next = prev->next;
-			prev->next = h_offset;
-		}
-	}
-	if (h_offset + dah_overhead + h->size == h->next) {
-		struct debug_alloc_header *next;
-		next = (struct debug_alloc_header *)
-			(debug_alloc_pool + h->next);
-		h->size += dah_overhead + next->size;
-		h->next = next->next;
-		memset(next, POISON_FREE, dah_overhead - 1);
-		*((char *)next + dah_overhead - 1) = POISON_END;
-	}
-	spin_unlock(&dap_lock);
-}
-
-void debug_kusage(void)
-{
-	struct debug_alloc_header *h_free, *h_used;
-#ifdef	CONFIG_IA64
-	/* FIXME: using dah for ia64 unwind always results in a memory leak.
-	 * Fix that memory leak first, then set debug_kusage_one_time = 1 for
-	 * all architectures.
-	 */
-	static int debug_kusage_one_time;
-#else
-	static int debug_kusage_one_time = 1;
-#endif
-	if (!get_dap_lock()) {
-		__release(dap_lock);	/* we never actually got it */
-		return;
-	}
-	h_free = (struct debug_alloc_header *)(debug_alloc_pool + dah_first);
-	if (dah_first == 0 &&
-	    (h_free->size == sizeof(debug_alloc_pool_aligned) - dah_overhead ||
-	     dah_first_call))
-		goto out;
-	if (!debug_kusage_one_time)
-		goto out;
-	debug_kusage_one_time = 0;
-	kdb_printf("%s: debug_kmalloc memory leak dah_first %d\n",
-		   __func__, dah_first);
-	if (dah_first) {
-		h_used = (struct debug_alloc_header *)debug_alloc_pool;
-		kdb_printf("%s: h_used %px size %d\n", __func__, h_used,
-			   h_used->size);
-	}
-	do {
-		h_used = (struct debug_alloc_header *)
-			  ((char *)h_free + dah_overhead + h_free->size);
-		kdb_printf("%s: h_used %px size %d caller %px\n",
-			   __func__, h_used, h_used->size, h_used->caller);
-		h_free = (struct debug_alloc_header *)
-			  (debug_alloc_pool + h_free->next);
-	} while (h_free->next);
-	h_used = (struct debug_alloc_header *)
-		  ((char *)h_free + dah_overhead + h_free->size);
-	if ((char *)h_used - debug_alloc_pool !=
-	    sizeof(debug_alloc_pool_aligned))
-		kdb_printf("%s: h_used %px size %d caller %px\n",
-			   __func__, h_used, h_used->size, h_used->caller);
-out:
-	spin_unlock(&dap_lock);
-}
-
 /* Maintain a small stack of kdb_flags to allow recursion without disturbing
  * the global kdb state.
  */
-- 
2.25.1



* Re: [PATCH v2] kdb: Get rid of custom debug heap allocator
  2021-03-23  6:55 [PATCH v2] kdb: Get rid of custom debug heap allocator Sumit Garg
@ 2021-06-17 11:25 ` Daniel Thompson
  2021-06-17 11:51   ` Sumit Garg
  0 siblings, 1 reply; 10+ messages in thread
From: Daniel Thompson @ 2021-06-17 11:25 UTC (permalink / raw)
  To: Sumit Garg; +Cc: kgdb-bugreport, jason.wessel, dianders, linux-kernel

On Tue, Mar 23, 2021 at 12:25:19PM +0530, Sumit Garg wrote:
> Currently the only user for debug heap is kdbnearsym() which can be
> modified to rather use statically allocated buffer for symbol name as
> per it's current usage. So do that and hence remove custom debug heap
> allocator.
> 
> Note that this change puts a restriction on kdbnearsym() callers to
> carefully use shared namebuf such that a caller should consume the symbol
> returned immediately prior to another call to fetch a different symbol.
> 
> This change has been tested using kgdbtest on arm64 which doesn't show
> any regressions.
> 
> Suggested-by: Daniel Thompson <daniel.thompson@linaro.org>
> Signed-off-by: Sumit Garg <sumit.garg@linaro.org>

I'm afraid the passage of time (mostly due to my dropping the ball)
means this no longer applies cleanly to the latest kernel and `git am
--3way` tells me that "sha1 information is lacking or useless".
Please can you rebase this on v5.13-rc4 and repost?

Also...


> diff --git a/kernel/debug/kdb/kdb_private.h b/kernel/debug/kdb/kdb_private.h
> index b857a84de3b5..ec91d7e02334 100644
> diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
> index b59aad1f0b55..e131d74abb8d 100644
> --- a/kernel/debug/kdb/kdb_support.c
> +++ b/kernel/debug/kdb/kdb_support.c
> @@ -57,35 +57,26 @@ int kdbgetsymval(const char *symname, kdb_symtab_t *symtab)
>  }
>  EXPORT_SYMBOL(kdbgetsymval);
>  
> -static char *kdb_name_table[100];	/* arbitrary size */
> -
>  /*
> - * kdbnearsym -	Return the name of the symbol with the nearest address
> - *	less than 'addr'.
> + * kdbnearsym() - Return the name of the symbol with the nearest address
> + *                less than @addr.
> + * @addr: Address to check for near symbol
> + * @symtab: Structure to receive results
>   *
> - * Parameters:
> - *	addr	Address to check for symbol near
> - *	symtab  Structure to receive results
> - * Returns:
> - *	0	No sections contain this address, symtab zero filled
> - *	1	Address mapped to module/symbol/section, data in symtab
> - * Remarks:
> - *	2.6 kallsyms has a "feature" where it unpacks the name into a
> - *	string.  If that string is reused before the caller expects it
> - *	then the caller sees its string change without warning.  To
> - *	avoid cluttering up the main kdb code with lots of kdb_strdup,
> - *	tests and kfree calls, kdbnearsym maintains an LRU list of the
> - *	last few unique strings.  The list is sized large enough to
> - *	hold active strings, no kdb caller of kdbnearsym makes more
> - *	than ~20 later calls before using a saved value.
> + * Note here that only single statically allocated namebuf is used for every
> + * symbol, so the caller should consume it immediately prior to another call
> + * to fetch a different symbol.

This still looks like it will confuse experienced kernel devs who
aren't aware of kdb's calling context. Nor does it help future kdb
developers understand some of the subtlety of the interactions here.

Can you enlarge this to the following (editing anything below that you
don't want git to blame you for ;-) ):

~~~
WARNING: This function may return a pointer to a single statically
allocated buffer (namebuf). kdb's unusual calling context (single
threaded, all other CPUs halted) provides us sufficient locking for
this to be safe. The only constraint imposed by the static buffer is
that the caller must consume any previous reply prior to another call
to lookup a new symbol.

Note that, strictly speaking, some architectures may re-enter the kdb
trap if the system turns out to be very badly damaged and this breaks
the single-threaded assumption above. In these circumstances successful
continuation and exit from the inner trap is unlikely to work and any
user attempting this receives a prominent warning before being allowed
to progress. In these circumstances we remain memory safe because
namebuf[KSYM_NAME_LEN-1] will never change from '\0' although we do
tolerate the possibility of garbled symbol display from the outer kdb
trap.
~~~
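
Purely as an illustration of that contract (a hypothetical call site,
not code from this patch; kdbnearsym(), kdb_printf() and the kdb_symtab_t
fields are as they appear in the diff, while addr/other_addr stand for
whatever addresses the command is decoding), a conforming caller looks
roughly like:

	kdb_symtab_t symtab;

	if (kdbnearsym(addr, &symtab))
		/* consume the reply right away; sym_name may point into namebuf */
		kdb_printf("%s+0x%lx\n", symtab.sym_name, addr - symtab.sym_start);

	/* only now is it safe to look up a different symbol, since the
	 * next call is allowed to overwrite namebuf */
	if (kdbnearsym(other_addr, &symtab))
		kdb_printf("%s\n", symtab.sym_name);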

Thanks


Daniel.


* Re: [PATCH v2] kdb: Get rid of custom debug heap allocator
  2021-06-17 11:25 ` Daniel Thompson
@ 2021-06-17 11:51   ` Sumit Garg
  0 siblings, 0 replies; 10+ messages in thread
From: Sumit Garg @ 2021-06-17 11:51 UTC (permalink / raw)
  To: Daniel Thompson
  Cc: kgdb-bugreport, Jason Wessel, Douglas Anderson,
	Linux Kernel Mailing List

On Thu, 17 Jun 2021 at 16:55, Daniel Thompson
<daniel.thompson@linaro.org> wrote:
>
> On Tue, Mar 23, 2021 at 12:25:19PM +0530, Sumit Garg wrote:
> > Currently the only user for debug heap is kdbnearsym() which can be
> > modified to rather use statically allocated buffer for symbol name as
> > per it's current usage. So do that and hence remove custom debug heap
> > allocator.
> >
> > Note that this change puts a restriction on kdbnearsym() callers to
> > carefully use shared namebuf such that a caller should consume the symbol
> > returned immediately prior to another call to fetch a different symbol.
> >
> > This change has been tested using kgdbtest on arm64 which doesn't show
> > any regressions.
> >
> > Suggested-by: Daniel Thompson <daniel.thompson@linaro.org>
> > Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
>
> I'm afraid the passage of time (mostly due to my dropping the ball)
> means this no longer applies cleanly to latest kernel and `git am
> -3way` tells me that "sha1 information is lacking or useless".
> Please can you rebase this on v5.13-rc4 and repost?
>

Sure.

> Also...
>
>
> > diff --git a/kernel/debug/kdb/kdb_private.h b/kernel/debug/kdb/kdb_private.h
> > index b857a84de3b5..ec91d7e02334 100644
> > diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
> > index b59aad1f0b55..e131d74abb8d 100644
> > --- a/kernel/debug/kdb/kdb_support.c
> > +++ b/kernel/debug/kdb/kdb_support.c
> > @@ -57,35 +57,26 @@ int kdbgetsymval(const char *symname, kdb_symtab_t *symtab)
> >  }
> >  EXPORT_SYMBOL(kdbgetsymval);
> >
> > -static char *kdb_name_table[100];    /* arbitrary size */
> > -
> >  /*
> > - * kdbnearsym -      Return the name of the symbol with the nearest address
> > - *   less than 'addr'.
> > + * kdbnearsym() - Return the name of the symbol with the nearest address
> > + *                less than @addr.
> > + * @addr: Address to check for near symbol
> > + * @symtab: Structure to receive results
> >   *
> > - * Parameters:
> > - *   addr    Address to check for symbol near
> > - *   symtab  Structure to receive results
> > - * Returns:
> > - *   0       No sections contain this address, symtab zero filled
> > - *   1       Address mapped to module/symbol/section, data in symtab
> > - * Remarks:
> > - *   2.6 kallsyms has a "feature" where it unpacks the name into a
> > - *   string.  If that string is reused before the caller expects it
> > - *   then the caller sees its string change without warning.  To
> > - *   avoid cluttering up the main kdb code with lots of kdb_strdup,
> > - *   tests and kfree calls, kdbnearsym maintains an LRU list of the
> > - *   last few unique strings.  The list is sized large enough to
> > - *   hold active strings, no kdb caller of kdbnearsym makes more
> > - *   than ~20 later calls before using a saved value.
> > + * Note here that only single statically allocated namebuf is used for every
> > + * symbol, so the caller should consume it immediately prior to another call
> > + * to fetch a different symbol.
>
> This still looks like it will confused experienced kernel devs who
> aren't aware of kdb's calling context. Nor does it help future kdb
> developers understand some of the subtlty of interactions here.
>
> Can you enlarge this to the following (editing anything below that you
> don't want git to blame you for ;-) ):
>
> ~~~
> WARNING: This function may return a pointer to a single statically
> allocated buffer (namebuf). kdb's unusual calling context (single
> threaded, all other CPUs halted) provides us sufficient locking for
> this to be safe. The only constraint imposed by the static buffer is
> that the caller must consume any previous reply prior to another call
> to lookup a new symbol.
>
> Note that, strictly speaking, some architectures may re-enter the kdb
> trap if the system turns out to be very badly damaged and this breaks
> the single-threaded assumption above. In these circumstances successful
> continuation and exit from the inner trap is unlikely to work and any
> user attempting this receives a prominent warning before being allowed
> to progress. In these circumstances we remain memory safe because
> namebuf[KSYM_NAME_LEN-1] will never change from '\0' although we do
> tolerate the possibility of garbled symbol display from the outer kdb
> trap.
> ~~~
>

Okay, I will use this comment instead.

-Sumit

> Thanks
>
>
> Daniel.


* Re: [PATCH v2] kdb: Get rid of custom debug heap allocator
  2021-03-19 17:35         ` Daniel Thompson
@ 2021-03-23  6:26           ` Sumit Garg
  0 siblings, 0 replies; 10+ messages in thread
From: Sumit Garg @ 2021-03-23  6:26 UTC (permalink / raw)
  To: Daniel Thompson
  Cc: kgdb-bugreport, Jason Wessel, Douglas Anderson,
	Linux Kernel Mailing List

On Fri, 19 Mar 2021 at 23:05, Daniel Thompson
<daniel.thompson@linaro.org> wrote:
>
> On Mon, Mar 01, 2021 at 11:33:00AM +0530, Sumit Garg wrote:
> > On Fri, 26 Feb 2021 at 23:07, Daniel Thompson
> > <daniel.thompson@linaro.org> wrote:
> > >
> > > On Fri, Feb 26, 2021 at 06:12:13PM +0530, Sumit Garg wrote:
> > > > On Fri, 26 Feb 2021 at 16:29, Daniel Thompson
> > > > <daniel.thompson@linaro.org> wrote:
> > > > >
> > > > > On Fri, Feb 26, 2021 at 03:23:06PM +0530, Sumit Garg wrote:
> > > > > > Currently the only user for debug heap is kdbnearsym() which can be
> > > > > > modified to rather ask the caller to supply a buffer for symbol name.
> > > > > > So do that and modify kdbnearsym() callers to pass a symbol name buffer
> > > > > > allocated statically and hence remove custom debug heap allocator.
> > > > >
> > > > > Why make the callers do this?
> > > > >
> > > > > The LRU buffers were managed inside kdbnearsym() why does switching to
> > > > > an approach with a single buffer require us to push that buffer out to
> > > > > the callers?
> > > > >
> > > >
> > > > Earlier the LRU buffers managed namebuf uniqueness per caller (upto
> > > > 100 callers)
> > >
> > > The uniqueness is per symbol, not per caller.
> > >
> >
> > Agree.
> >
> > > > but if we switch to single entry in kdbnearsym() then all
> > > > callers need to share common buffer which will lead to incorrect
> > > > results from following simple sequence:
> > > >
> > > > kdbnearsym(word, &symtab1);
> > > > kdbnearsym(word, &symtab2);
> > > > kdb_symbol_print(word, &symtab1, 0);
> > > > kdb_symbol_print(word, &symtab2, 0);
> > > >
> > > > But if we change to a unique static namebuf per caller then the
> > > > following sequence will work:
> > > >
> > > > kdbnearsym(word, &symtab1, namebuf1);
> > > > kdbnearsym(word, &symtab2, namebuf2);
> > > > kdb_symbol_print(word, &symtab1, 0);
> > > > kdb_symbol_print(word, &symtab2, 0);
> > >
> > > This is true but do any of the callers of kdbnearsym ever do this?
> >
> > No, but any of prospective callers may need this.
> >
> > > The
> > > main reaason that heap stuck out as redundant was that I've only ever
> > > seen the output of kdbnearsym() consumed almost immediately by a print.
> > >
> >
> > Yeah but I think the alternative proposed in this patch isn't as
> > burdensome as the heap and tries to somewhat match existing
> > functionality.
> >
> > > I wrote an early version of a patch like this that just shrunk the LRU
> > > cache down to 2 and avoided any heap usage... but I threw it away
> > > when I realized we never carry cached values outside the function
> > > that obtained them.
> > >
> >
> > Okay, so if you still think that having a single static buffer inside
> > kdbnearsym() is an appropriate approach for time being then I will
> > switch to use that instead.
>
> Sorry to drop this thread for so long.
>
> On reflection I still have a few concerns about the current code.
> To be clear this is not really about wasting 128 bytes of RAM (your
> patch saves 256K after all).
>
> It's more that the current static buffers "look weird". They are static
> so any competent OS programmer reads them and thinks "but what about
> concurrency/reentrancy"). With the static buffers scattered through the
> code they don't have a single place to find the answer.
>
> I originally proposed handling this by the static buffer horror in
> kdbnearsym() and describing how it all works in the header comment!
> As much as anything this was to centralize the commentary in the
> contract for calling kdbnearsym(). Hence nobody should write the
> theoretic bug you describe because they read the contract!
>
> You are welcome to counter propose but you must ensure that there are
> equivalent comments so our "competent OS programmer" from the paragraph
> above can figure out how the static buffer works without having to run
> git blame` and digging out the patch history.
>

Okay, I understand your point here. Let me go ahead with a single
static buffer in kdbnearsym() with a proper header comment.

-Sumit

>
> Daniel.
>
>
>
> >
> > -Sumit
> >
> > >
> > > > > > @@ -526,6 +526,7 @@ int kdbgetaddrarg(int argc, const char **argv, int *nextarg,
> > > >
> > > > >
> > > > > > diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
> > > > > > index 9d69169582c6..6efe9ec53906 100644
> > > > > > --- a/kernel/debug/kdb/kdb_main.c
> > > > > > +++ b/kernel/debug/kdb/kdb_main.c
> > > > > > @@ -526,6 +526,7 @@ int kdbgetaddrarg(int argc, const char **argv, int *nextarg,
> > > > >
> > > > > The documentation comment for this function has not been updated to
> > > > > describe the new contract on callers of this function (e.g. if they
> > > > > consume the symbol name they must do so before calling kdbgetaddrarg()
> > > > > (and maybe kdbnearsym() again).
> > > > >
> > > >
> > > > I am not sure if I follow you here. If we have a unique static buffer
> > > > per caller then why do we need this new contract?
> > >
> > > I traced the code wrong. I thought it shared symtab->sym_name with its
> > > own caller... but it doesn't it shares synname with its caller and
> > > that's totally different...
> > >
> > >
> > > Daniel.
> > >
> > > >
> > > > >
> > > > > >       char symbol = '\0';
> > > > > >       char *cp;
> > > > > >       kdb_symtab_t symtab;
> > > > > > +     static char namebuf[KSYM_NAME_LEN];
> > > > > >
> > > > > >       /*
> > > > > >        * If the enable flags prohibit both arbitrary memory access
> > > > > > diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
> > > > > > index b59aad1f0b55..9b907a84f2db 100644
> > > > > > --- a/kernel/debug/kdb/kdb_support.c
> > > > > > +++ b/kernel/debug/kdb/kdb_support.c
> > > > > > @@ -57,8 +57,6 @@ int kdbgetsymval(const char *symname, kdb_symtab_t *symtab)
> > > > > >  }
> > > > > >  EXPORT_SYMBOL(kdbgetsymval);
> > > > > >
> > > > > > -static char *kdb_name_table[100];    /* arbitrary size */
> > > > > > -
> > > > > >  /*
> > > > > >   * kdbnearsym -      Return the name of the symbol with the nearest address
> > > > > >   *   less than 'addr'.
> > > > >
> > > > > Again the documentation comment has not been updated and, in this case,
> > > > > is now misleading.
> > > >
> > > > Okay, I will fix it.
> > > >
> > > > >
> > > > > If we move the static buffer here then the remarks section on this
> > > > > function is a really good place to describe what the callers must do to
> > > > > manage the static buffer safely as well as a convenient place to mention
> > > > > that we tolerate the reuse of the static buffer if kdb is re-entered
> > > > > becase a) kdb is broken if that happens and b) we are crash resilient
> > > > > if if does.
> > > > >
> > > > >
> > > > > > @@ -79,13 +77,11 @@ static char *kdb_name_table[100]; /* arbitrary size */
> > > > > >   *   hold active strings, no kdb caller of kdbnearsym makes more
> > > > > >   *   than ~20 later calls before using a saved value.
> > > > > >   */
> > > > > > -int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
> > > > > > +int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab, char *namebuf)
> > > > >
> > > > > As above, I don't understand why we need to add namebuf here. I think
> > > > > the prototype can remain the same.
> > > > >
> > > > > Think of it simple that we have reduce the cache from having 100 entries
> > > > > to having just 1 ;-) .
> > > >
> > > > Please see my response above.
> > > >
> > > > -Sumit
> > > >
> > > > >
> > > > >
> > > > > Daniel.


* Re: [PATCH v2] kdb: Get rid of custom debug heap allocator
  2021-03-01  6:03       ` Sumit Garg
@ 2021-03-19 17:35         ` Daniel Thompson
  2021-03-23  6:26           ` Sumit Garg
  0 siblings, 1 reply; 10+ messages in thread
From: Daniel Thompson @ 2021-03-19 17:35 UTC (permalink / raw)
  To: Sumit Garg
  Cc: kgdb-bugreport, Jason Wessel, Douglas Anderson,
	Linux Kernel Mailing List

On Mon, Mar 01, 2021 at 11:33:00AM +0530, Sumit Garg wrote:
> On Fri, 26 Feb 2021 at 23:07, Daniel Thompson
> <daniel.thompson@linaro.org> wrote:
> >
> > On Fri, Feb 26, 2021 at 06:12:13PM +0530, Sumit Garg wrote:
> > > On Fri, 26 Feb 2021 at 16:29, Daniel Thompson
> > > <daniel.thompson@linaro.org> wrote:
> > > >
> > > > On Fri, Feb 26, 2021 at 03:23:06PM +0530, Sumit Garg wrote:
> > > > > Currently the only user for debug heap is kdbnearsym() which can be
> > > > > modified to rather ask the caller to supply a buffer for symbol name.
> > > > > So do that and modify kdbnearsym() callers to pass a symbol name buffer
> > > > > allocated statically and hence remove custom debug heap allocator.
> > > >
> > > > Why make the callers do this?
> > > >
> > > > The LRU buffers were managed inside kdbnearsym() why does switching to
> > > > an approach with a single buffer require us to push that buffer out to
> > > > the callers?
> > > >
> > >
> > > Earlier the LRU buffers managed namebuf uniqueness per caller (upto
> > > 100 callers)
> >
> > The uniqueness is per symbol, not per caller.
> >
> 
> Agree.
> 
> > > but if we switch to single entry in kdbnearsym() then all
> > > callers need to share common buffer which will lead to incorrect
> > > results from following simple sequence:
> > >
> > > kdbnearsym(word, &symtab1);
> > > kdbnearsym(word, &symtab2);
> > > kdb_symbol_print(word, &symtab1, 0);
> > > kdb_symbol_print(word, &symtab2, 0);
> > >
> > > But if we change to a unique static namebuf per caller then the
> > > following sequence will work:
> > >
> > > kdbnearsym(word, &symtab1, namebuf1);
> > > kdbnearsym(word, &symtab2, namebuf2);
> > > kdb_symbol_print(word, &symtab1, 0);
> > > kdb_symbol_print(word, &symtab2, 0);
> >
> > This is true but do any of the callers of kdbnearsym ever do this?
> 
> No, but any of prospective callers may need this.
> 
> > The
> > main reaason that heap stuck out as redundant was that I've only ever
> > seen the output of kdbnearsym() consumed almost immediately by a print.
> >
> 
> Yeah but I think the alternative proposed in this patch isn't as
> burdensome as the heap and tries to somewhat match existing
> functionality.
> 
> > I wrote an early version of a patch like this that just shrunk the LRU
> > cache down to 2 and avoided any heap usage... but I threw it away
> > when I realized we never carry cached values outside the function
> > that obtained them.
> >
> 
> Okay, so if you still think that having a single static buffer inside
> kdbnearsym() is an appropriate approach for time being then I will
> switch to use that instead.

Sorry to drop this thread for so long.

On reflection I still have a few concerns about the current code.
To be clear this is not really about wasting 128 bytes of RAM (your
patch saves 256K after all).

It's more that the current static buffers "look weird". They are static,
so any competent OS programmer reads them and thinks "but what about
concurrency/reentrancy?". With the static buffers scattered through the
code, there is no single place to find the answer.

I originally proposed handling this by keeping the static buffer horror in
kdbnearsym() and describing how it all works in the header comment!
As much as anything this was to centralize the commentary in the
contract for calling kdbnearsym(). Hence nobody should write the
theoretical bug you describe, because they will have read the contract!

You are welcome to counter-propose but you must ensure that there are
equivalent comments so our "competent OS programmer" from the paragraph
above can figure out how the static buffer works without having to run
`git blame` and dig out the patch history.


Daniel.



> 
> -Sumit
> 
> >
> > > > > @@ -526,6 +526,7 @@ int kdbgetaddrarg(int argc, const char **argv, int *nextarg,
> > >
> > > >
> > > > > diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
> > > > > index 9d69169582c6..6efe9ec53906 100644
> > > > > --- a/kernel/debug/kdb/kdb_main.c
> > > > > +++ b/kernel/debug/kdb/kdb_main.c
> > > > > @@ -526,6 +526,7 @@ int kdbgetaddrarg(int argc, const char **argv, int *nextarg,
> > > >
> > > > The documentation comment for this function has not been updated to
> > > > describe the new contract on callers of this function (e.g. if they
> > > > consume the symbol name they must do so before calling kdbgetaddrarg()
> > > > (and maybe kdbnearsym() again).
> > > >
> > >
> > > I am not sure if I follow you here. If we have a unique static buffer
> > > per caller then why do we need this new contract?
> >
> > I traced the code wrong. I thought it shared symtab->sym_name with its
> > own caller... but it doesn't it shares synname with its caller and
> > that's totally different...
> >
> >
> > Daniel.
> >
> > >
> > > >
> > > > >       char symbol = '\0';
> > > > >       char *cp;
> > > > >       kdb_symtab_t symtab;
> > > > > +     static char namebuf[KSYM_NAME_LEN];
> > > > >
> > > > >       /*
> > > > >        * If the enable flags prohibit both arbitrary memory access
> > > > > diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
> > > > > index b59aad1f0b55..9b907a84f2db 100644
> > > > > --- a/kernel/debug/kdb/kdb_support.c
> > > > > +++ b/kernel/debug/kdb/kdb_support.c
> > > > > @@ -57,8 +57,6 @@ int kdbgetsymval(const char *symname, kdb_symtab_t *symtab)
> > > > >  }
> > > > >  EXPORT_SYMBOL(kdbgetsymval);
> > > > >
> > > > > -static char *kdb_name_table[100];    /* arbitrary size */
> > > > > -
> > > > >  /*
> > > > >   * kdbnearsym -      Return the name of the symbol with the nearest address
> > > > >   *   less than 'addr'.
> > > >
> > > > Again the documentation comment has not been updated and, in this case,
> > > > is now misleading.
> > >
> > > Okay, I will fix it.
> > >
> > > >
> > > > If we move the static buffer here then the remarks section on this
> > > > function is a really good place to describe what the callers must do to
> > > > manage the static buffer safely as well as a convenient place to mention
> > > > that we tolerate the reuse of the static buffer if kdb is re-entered
> > > > becase a) kdb is broken if that happens and b) we are crash resilient
> > > > if if does.
> > > >
> > > >
> > > > > @@ -79,13 +77,11 @@ static char *kdb_name_table[100]; /* arbitrary size */
> > > > >   *   hold active strings, no kdb caller of kdbnearsym makes more
> > > > >   *   than ~20 later calls before using a saved value.
> > > > >   */
> > > > > -int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
> > > > > +int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab, char *namebuf)
> > > >
> > > > As above, I don't understand why we need to add namebuf here. I think
> > > > the prototype can remain the same.
> > > >
> > > > Think of it simple that we have reduce the cache from having 100 entries
> > > > to having just 1 ;-) .
> > >
> > > Please see my response above.
> > >
> > > -Sumit
> > >
> > > >
> > > >
> > > > Daniel.


* Re: [PATCH v2] kdb: Get rid of custom debug heap allocator
  2021-02-26 17:37     ` Daniel Thompson
@ 2021-03-01  6:03       ` Sumit Garg
  2021-03-19 17:35         ` Daniel Thompson
  0 siblings, 1 reply; 10+ messages in thread
From: Sumit Garg @ 2021-03-01  6:03 UTC (permalink / raw)
  To: Daniel Thompson
  Cc: kgdb-bugreport, Jason Wessel, Douglas Anderson,
	Linux Kernel Mailing List

On Fri, 26 Feb 2021 at 23:07, Daniel Thompson
<daniel.thompson@linaro.org> wrote:
>
> On Fri, Feb 26, 2021 at 06:12:13PM +0530, Sumit Garg wrote:
> > On Fri, 26 Feb 2021 at 16:29, Daniel Thompson
> > <daniel.thompson@linaro.org> wrote:
> > >
> > > On Fri, Feb 26, 2021 at 03:23:06PM +0530, Sumit Garg wrote:
> > > > Currently the only user for debug heap is kdbnearsym() which can be
> > > > modified to rather ask the caller to supply a buffer for symbol name.
> > > > So do that and modify kdbnearsym() callers to pass a symbol name buffer
> > > > allocated statically and hence remove custom debug heap allocator.
> > >
> > > Why make the callers do this?
> > >
> > > The LRU buffers were managed inside kdbnearsym() why does switching to
> > > an approach with a single buffer require us to push that buffer out to
> > > the callers?
> > >
> >
> > Earlier the LRU buffers managed namebuf uniqueness per caller (upto
> > 100 callers)
>
> The uniqueness is per symbol, not per caller.
>

Agree.

> > but if we switch to single entry in kdbnearsym() then all
> > callers need to share common buffer which will lead to incorrect
> > results from following simple sequence:
> >
> > kdbnearsym(word, &symtab1);
> > kdbnearsym(word, &symtab2);
> > kdb_symbol_print(word, &symtab1, 0);
> > kdb_symbol_print(word, &symtab2, 0);
> >
> > But if we change to a unique static namebuf per caller then the
> > following sequence will work:
> >
> > kdbnearsym(word, &symtab1, namebuf1);
> > kdbnearsym(word, &symtab2, namebuf2);
> > kdb_symbol_print(word, &symtab1, 0);
> > kdb_symbol_print(word, &symtab2, 0);
>
> This is true but do any of the callers of kdbnearsym ever do this?

No, but any prospective caller may need this.

> The
> main reaason that heap stuck out as redundant was that I've only ever
> seen the output of kdbnearsym() consumed almost immediately by a print.
>

Yeah but I think the alternative proposed in this patch isn't as
burdensome as the heap and tries to somewhat match existing
functionality.

> I wrote an early version of a patch like this that just shrunk the LRU
> cache down to 2 and avoided any heap usage... but I threw it away
> when I realized we never carry cached values outside the function
> that obtained them.
>

Okay, so if you still think that having a single static buffer inside
kdbnearsym() is an appropriate approach for the time being, then I will
switch to using that instead.

-Sumit

>
> > > > @@ -526,6 +526,7 @@ int kdbgetaddrarg(int argc, const char **argv, int *nextarg,
> >
> > >
> > > > diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
> > > > index 9d69169582c6..6efe9ec53906 100644
> > > > --- a/kernel/debug/kdb/kdb_main.c
> > > > +++ b/kernel/debug/kdb/kdb_main.c
> > > > @@ -526,6 +526,7 @@ int kdbgetaddrarg(int argc, const char **argv, int *nextarg,
> > >
> > > The documentation comment for this function has not been updated to
> > > describe the new contract on callers of this function (e.g. if they
> > > consume the symbol name they must do so before calling kdbgetaddrarg()
> > > (and maybe kdbnearsym() again).
> > >
> >
> > I am not sure if I follow you here. If we have a unique static buffer
> > per caller then why do we need this new contract?
>
> I traced the code wrong. I thought it shared symtab->sym_name with its
> own caller... but it doesn't it shares synname with its caller and
> that's totally different...
>
>
> Daniel.
>
> >
> > >
> > > >       char symbol = '\0';
> > > >       char *cp;
> > > >       kdb_symtab_t symtab;
> > > > +     static char namebuf[KSYM_NAME_LEN];
> > > >
> > > >       /*
> > > >        * If the enable flags prohibit both arbitrary memory access
> > > > diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
> > > > index b59aad1f0b55..9b907a84f2db 100644
> > > > --- a/kernel/debug/kdb/kdb_support.c
> > > > +++ b/kernel/debug/kdb/kdb_support.c
> > > > @@ -57,8 +57,6 @@ int kdbgetsymval(const char *symname, kdb_symtab_t *symtab)
> > > >  }
> > > >  EXPORT_SYMBOL(kdbgetsymval);
> > > >
> > > > -static char *kdb_name_table[100];    /* arbitrary size */
> > > > -
> > > >  /*
> > > >   * kdbnearsym -      Return the name of the symbol with the nearest address
> > > >   *   less than 'addr'.
> > >
> > > Again the documentation comment has not been updated and, in this case,
> > > is now misleading.
> >
> > Okay, I will fix it.
> >
> > >
> > > If we move the static buffer here then the remarks section on this
> > > function is a really good place to describe what the callers must do to
> > > manage the static buffer safely as well as a convenient place to mention
> > > that we tolerate the reuse of the static buffer if kdb is re-entered
> > > becase a) kdb is broken if that happens and b) we are crash resilient
> > > if if does.
> > >
> > >
> > > > @@ -79,13 +77,11 @@ static char *kdb_name_table[100]; /* arbitrary size */
> > > >   *   hold active strings, no kdb caller of kdbnearsym makes more
> > > >   *   than ~20 later calls before using a saved value.
> > > >   */
> > > > -int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
> > > > +int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab, char *namebuf)
> > >
> > > As above, I don't understand why we need to add namebuf here. I think
> > > the prototype can remain the same.
> > >
> > > Think of it simple that we have reduce the cache from having 100 entries
> > > to having just 1 ;-) .
> >
> > Please see my response above.
> >
> > -Sumit
> >
> > >
> > >
> > > Daniel.


* Re: [PATCH v2] kdb: Get rid of custom debug heap allocator
  2021-02-26 12:42   ` Sumit Garg
@ 2021-02-26 17:37     ` Daniel Thompson
  2021-03-01  6:03       ` Sumit Garg
  0 siblings, 1 reply; 10+ messages in thread
From: Daniel Thompson @ 2021-02-26 17:37 UTC (permalink / raw)
  To: Sumit Garg
  Cc: kgdb-bugreport, Jason Wessel, Douglas Anderson,
	Linux Kernel Mailing List

On Fri, Feb 26, 2021 at 06:12:13PM +0530, Sumit Garg wrote:
> On Fri, 26 Feb 2021 at 16:29, Daniel Thompson
> <daniel.thompson@linaro.org> wrote:
> >
> > On Fri, Feb 26, 2021 at 03:23:06PM +0530, Sumit Garg wrote:
> > > Currently the only user for debug heap is kdbnearsym() which can be
> > > modified to rather ask the caller to supply a buffer for symbol name.
> > > So do that and modify kdbnearsym() callers to pass a symbol name buffer
> > > allocated statically and hence remove custom debug heap allocator.
> >
> > Why make the callers do this?
> >
> > The LRU buffers were managed inside kdbnearsym() why does switching to
> > an approach with a single buffer require us to push that buffer out to
> > the callers?
> >
> 
> Earlier the LRU buffers managed namebuf uniqueness per caller (upto
> 100 callers)

The uniqueness is per symbol, not per caller.

> but if we switch to single entry in kdbnearsym() then all
> callers need to share common buffer which will lead to incorrect
> results from following simple sequence:
> 
> kdbnearsym(word, &symtab1);
> kdbnearsym(word, &symtab2);
> kdb_symbol_print(word, &symtab1, 0);
> kdb_symbol_print(word, &symtab2, 0);
> 
> But if we change to a unique static namebuf per caller then the
> following sequence will work:
> 
> kdbnearsym(word, &symtab1, namebuf1);
> kdbnearsym(word, &symtab2, namebuf2);
> kdb_symbol_print(word, &symtab1, 0);
> kdb_symbol_print(word, &symtab2, 0);

This is true, but do any of the callers of kdbnearsym ever do this? The
main reason the heap stuck out as redundant was that I've only ever
seen the output of kdbnearsym() consumed almost immediately by a print.

I wrote an early version of a patch like this that just shrunk the LRU
cache down to 2 and avoided any heap usage... but I threw it away
when I realized we never carry cached values outside the function
that obtained them.


> > > @@ -526,6 +526,7 @@ int kdbgetaddrarg(int argc, const char **argv, int *nextarg,
> 
> >
> > > diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
> > > index 9d69169582c6..6efe9ec53906 100644
> > > --- a/kernel/debug/kdb/kdb_main.c
> > > +++ b/kernel/debug/kdb/kdb_main.c
> > > @@ -526,6 +526,7 @@ int kdbgetaddrarg(int argc, const char **argv, int *nextarg,
> >
> > The documentation comment for this function has not been updated to
> > describe the new contract on callers of this function (e.g. if they
> > consume the symbol name they must do so before calling kdbgetaddrarg()
> > (and maybe kdbnearsym() again).
> >
> 
> I am not sure if I follow you here. If we have a unique static buffer
> per caller then why do we need this new contract?

I traced the code wrong. I thought it shared symtab->sym_name with its
own caller... but it doesn't; it shares symname with its caller, and
that's totally different...


Daniel.

> 
> >
> > >       char symbol = '\0';
> > >       char *cp;
> > >       kdb_symtab_t symtab;
> > > +     static char namebuf[KSYM_NAME_LEN];
> > >
> > >       /*
> > >        * If the enable flags prohibit both arbitrary memory access
> > > diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
> > > index b59aad1f0b55..9b907a84f2db 100644
> > > --- a/kernel/debug/kdb/kdb_support.c
> > > +++ b/kernel/debug/kdb/kdb_support.c
> > > @@ -57,8 +57,6 @@ int kdbgetsymval(const char *symname, kdb_symtab_t *symtab)
> > >  }
> > >  EXPORT_SYMBOL(kdbgetsymval);
> > >
> > > -static char *kdb_name_table[100];    /* arbitrary size */
> > > -
> > >  /*
> > >   * kdbnearsym -      Return the name of the symbol with the nearest address
> > >   *   less than 'addr'.
> >
> > Again the documentation comment has not been updated and, in this case,
> > is now misleading.
> 
> Okay, I will fix it.
> 
> >
> > If we move the static buffer here then the remarks section on this
> > function is a really good place to describe what the callers must do to
> > manage the static buffer safely as well as a convenient place to mention
> > that we tolerate the reuse of the static buffer if kdb is re-entered
> > becase a) kdb is broken if that happens and b) we are crash resilient
> > if if does.
> >
> >
> > > @@ -79,13 +77,11 @@ static char *kdb_name_table[100]; /* arbitrary size */
> > >   *   hold active strings, no kdb caller of kdbnearsym makes more
> > >   *   than ~20 later calls before using a saved value.
> > >   */
> > > -int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
> > > +int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab, char *namebuf)
> >
> > As above, I don't understand why we need to add namebuf here. I think
> > the prototype can remain the same.
> >
> > Think of it simple that we have reduce the cache from having 100 entries
> > to having just 1 ;-) .
> 
> Please see my response above.
> 
> -Sumit
> 
> >
> >
> > Daniel.


* Re: [PATCH v2] kdb: Get rid of custom debug heap allocator
  2021-02-26 10:59 ` Daniel Thompson
@ 2021-02-26 12:42   ` Sumit Garg
  2021-02-26 17:37     ` Daniel Thompson
  0 siblings, 1 reply; 10+ messages in thread
From: Sumit Garg @ 2021-02-26 12:42 UTC (permalink / raw)
  To: Daniel Thompson
  Cc: kgdb-bugreport, Jason Wessel, Douglas Anderson,
	Linux Kernel Mailing List

On Fri, 26 Feb 2021 at 16:29, Daniel Thompson
<daniel.thompson@linaro.org> wrote:
>
> On Fri, Feb 26, 2021 at 03:23:06PM +0530, Sumit Garg wrote:
> > Currently the only user for debug heap is kdbnearsym() which can be
> > modified to rather ask the caller to supply a buffer for symbol name.
> > So do that and modify kdbnearsym() callers to pass a symbol name buffer
> > allocated statically and hence remove custom debug heap allocator.
>
> Why make the callers do this?
>
> The LRU buffers were managed inside kdbnearsym() why does switching to
> an approach with a single buffer require us to push that buffer out to
> the callers?
>

Earlier the LRU buffers managed namebuf uniqueness per caller (upto
100 callers) but if we switch to single entry in kdbnearsym() then all
callers need to share common buffer which will lead to incorrect
results from following simple sequence:

kdbnearsym(word, &symtab1);
kdbnearsym(word, &symtab2);
kdb_symbol_print(word, &symtab1, 0);
kdb_symbol_print(word, &symtab2, 0);

But if we change to a unique static namebuf per caller, then the
following sequence will work:

kdbnearsym(word, &symtab1, namebuf1);
kdbnearsym(word, &symtab2, namebuf2);
kdb_symbol_print(word, &symtab1, 0);
kdb_symbol_print(word, &symtab2, 0);
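
To spell out the failure in the first sequence (illustrative comments
only, using two different addresses word1/word2 rather than the single
word above, and assuming sym_name ends up pointing into the one shared
static buffer):

kdbnearsym(word1, &symtab1);          /* shared namebuf now holds word1's name */
kdbnearsym(word2, &symtab2);          /* namebuf overwritten with word2's name */
kdb_symbol_print(word1, &symtab1, 0); /* symtab1.sym_name can point at namebuf,
                                       * so this may print word2's name */
kdb_symbol_print(word2, &symtab2, 0); /* correct, prints word2's name */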

>
> > diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
> > index 9d69169582c6..6efe9ec53906 100644
> > --- a/kernel/debug/kdb/kdb_main.c
> > +++ b/kernel/debug/kdb/kdb_main.c
> > @@ -526,6 +526,7 @@ int kdbgetaddrarg(int argc, const char **argv, int *nextarg,
>
> The documentation comment for this function has not been updated to
> describe the new contract on callers of this function (e.g. if they
> consume the symbol name they must do so before calling kdbgetaddrarg()
> (and maybe kdbnearsym() again).
>

I am not sure if I follow you here. If we have a unique static buffer
per caller then why do we need this new contract?

>
> >       char symbol = '\0';
> >       char *cp;
> >       kdb_symtab_t symtab;
> > +     static char namebuf[KSYM_NAME_LEN];
> >
> >       /*
> >        * If the enable flags prohibit both arbitrary memory access
> > diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
> > index b59aad1f0b55..9b907a84f2db 100644
> > --- a/kernel/debug/kdb/kdb_support.c
> > +++ b/kernel/debug/kdb/kdb_support.c
> > @@ -57,8 +57,6 @@ int kdbgetsymval(const char *symname, kdb_symtab_t *symtab)
> >  }
> >  EXPORT_SYMBOL(kdbgetsymval);
> >
> > -static char *kdb_name_table[100];    /* arbitrary size */
> > -
> >  /*
> >   * kdbnearsym -      Return the name of the symbol with the nearest address
> >   *   less than 'addr'.
>
> Again the documentation comment has not been updated and, in this case,
> is now misleading.

Okay, I will fix it.

>
> If we move the static buffer here then the remarks section on this
> function is a really good place to describe what the callers must do to
> manage the static buffer safely as well as a convenient place to mention
> that we tolerate the reuse of the static buffer if kdb is re-entered
> becase a) kdb is broken if that happens and b) we are crash resilient
> if if does.
>
>
> > @@ -79,13 +77,11 @@ static char *kdb_name_table[100]; /* arbitrary size */
> >   *   hold active strings, no kdb caller of kdbnearsym makes more
> >   *   than ~20 later calls before using a saved value.
> >   */
> > -int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
> > +int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab, char *namebuf)
>
> As above, I don't understand why we need to add namebuf here. I think
> the prototype can remain the same.
>
> Think of it simple that we have reduce the cache from having 100 entries
> to having just 1 ;-) .

Please see my response above.

-Sumit

>
>
> Daniel.


* Re: [PATCH v2] kdb: Get rid of custom debug heap allocator
  2021-02-26  9:53 Sumit Garg
@ 2021-02-26 10:59 ` Daniel Thompson
  2021-02-26 12:42   ` Sumit Garg
  0 siblings, 1 reply; 10+ messages in thread
From: Daniel Thompson @ 2021-02-26 10:59 UTC (permalink / raw)
  To: Sumit Garg; +Cc: kgdb-bugreport, jason.wessel, dianders, linux-kernel

On Fri, Feb 26, 2021 at 03:23:06PM +0530, Sumit Garg wrote:
> Currently the only user for debug heap is kdbnearsym() which can be
> modified to rather ask the caller to supply a buffer for symbol name.
> So do that and modify kdbnearsym() callers to pass a symbol name buffer
> allocated statically and hence remove custom debug heap allocator.

Why make the callers do this?

The LRU buffers were managed inside kdbnearsym(), so why does switching to
an approach with a single buffer require us to push that buffer out to
the callers?


> diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
> index 9d69169582c6..6efe9ec53906 100644
> --- a/kernel/debug/kdb/kdb_main.c
> +++ b/kernel/debug/kdb/kdb_main.c
> @@ -526,6 +526,7 @@ int kdbgetaddrarg(int argc, const char **argv, int *nextarg,

The documentation comment for this function has not been updated to
describe the new contract on callers of this function (e.g. if they
consume the symbol name they must do so before calling kdbgetaddrarg(),
and maybe kdbnearsym(), again).


>  	char symbol = '\0';
>  	char *cp;
>  	kdb_symtab_t symtab;
> +	static char namebuf[KSYM_NAME_LEN];
>  
>  	/*
>  	 * If the enable flags prohibit both arbitrary memory access
> diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
> index b59aad1f0b55..9b907a84f2db 100644
> --- a/kernel/debug/kdb/kdb_support.c
> +++ b/kernel/debug/kdb/kdb_support.c
> @@ -57,8 +57,6 @@ int kdbgetsymval(const char *symname, kdb_symtab_t *symtab)
>  }
>  EXPORT_SYMBOL(kdbgetsymval);
>  
> -static char *kdb_name_table[100];	/* arbitrary size */
> -
>  /*
>   * kdbnearsym -	Return the name of the symbol with the nearest address
>   *	less than 'addr'.

Again the documentation comment has not been updated and, in this case,
is now misleading.

If we move the static buffer here then the remarks section on this
function is a really good place to describe what the callers must do to
manage the static buffer safely as well as a convenient place to mention
that we tolerate the reuse of the static buffer if kdb is re-entered
because a) kdb is broken if that happens and b) we are crash resilient
if it does.
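
For example, something along these lines (illustrative wording only,
the exact text is up to you) would cover both points in the remarks:

 * Remarks:
 *	The returned symbol name is held in a single static buffer shared
 *	by all callers, so the caller must consume the name before the
 *	next call to kdbnearsym(). If kdb is re-entered the buffer may be
 *	overwritten mid-use; that is tolerated because kdb is already
 *	broken at that point and, at worst, we print a garbled name
 *	rather than crash.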


> @@ -79,13 +77,11 @@ static char *kdb_name_table[100];	/* arbitrary size */
>   *	hold active strings, no kdb caller of kdbnearsym makes more
>   *	than ~20 later calls before using a saved value.
>   */
> -int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
> +int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab, char *namebuf)

As above, I don't understand why we need to add namebuf here. I think
the prototype can remain the same.

Think of it simply as reducing the cache from 100 entries to just 1 ;-).
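
A minimal sketch of that idea (illustrative only; the debug prints and
the oversized-offset sanity check are elided, and the naming is up to
you) keeps the prototype untouched:

int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
{
	/* The single remaining "cache entry", shared by all callers. */
	static char namebuf[KSYM_NAME_LEN];
	unsigned long symbolsize = 0;
	unsigned long offset = 0;

	memset(symtab, 0, sizeof(*symtab));
	if (addr < 4096)
		return 0;

	symtab->sym_name = kallsyms_lookup(addr, &symbolsize, &offset,
					   (char **)(&symtab->mod_name),
					   namebuf);
	symtab->sym_start = addr - offset;
	symtab->sym_end = symtab->sym_start + symbolsize;
	if (!symtab->mod_name)
		symtab->mod_name = "kernel";

	return symtab->sym_name != NULL && *symtab->sym_name != '\0';
}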


Daniel.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH v2] kdb: Get rid of custom debug heap allocator
@ 2021-02-26  9:53 Sumit Garg
  2021-02-26 10:59 ` Daniel Thompson
  0 siblings, 1 reply; 10+ messages in thread
From: Sumit Garg @ 2021-02-26  9:53 UTC (permalink / raw)
  To: kgdb-bugreport
  Cc: daniel.thompson, jason.wessel, dianders, linux-kernel, Sumit Garg

Currently the only user of the debug heap is kdbnearsym(), which can be
modified to ask the caller to supply a buffer for the symbol name instead.
So do that and modify kdbnearsym() callers to pass a statically allocated
symbol name buffer, and hence remove the custom debug heap allocator.

This change has been tested using kgdbtest on arm64 which doesn't show
any regressions.

Suggested-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
---

Changes in v2:
- Allocate namebuf statically instead of on the stack to maintain debugger
  robustness.

 kernel/debug/kdb/kdb_debugger.c |   1 -
 kernel/debug/kdb/kdb_main.c     |   6 +-
 kernel/debug/kdb/kdb_private.h  |   7 +-
 kernel/debug/kdb/kdb_support.c  | 294 +-------------------------------
 4 files changed, 11 insertions(+), 297 deletions(-)

diff --git a/kernel/debug/kdb/kdb_debugger.c b/kernel/debug/kdb/kdb_debugger.c
index 0220afda3200..e91fc3e4edd5 100644
--- a/kernel/debug/kdb/kdb_debugger.c
+++ b/kernel/debug/kdb/kdb_debugger.c
@@ -140,7 +140,6 @@ int kdb_stub(struct kgdb_state *ks)
 	 */
 	kdb_common_deinit_state();
 	KDB_STATE_CLEAR(PAGER);
-	kdbnearsym_cleanup();
 	if (error == KDB_CMD_KGDB) {
 		if (KDB_STATE(DOING_KGDB))
 			KDB_STATE_CLEAR(DOING_KGDB);
diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
index 9d69169582c6..6efe9ec53906 100644
--- a/kernel/debug/kdb/kdb_main.c
+++ b/kernel/debug/kdb/kdb_main.c
@@ -526,6 +526,7 @@ int kdbgetaddrarg(int argc, const char **argv, int *nextarg,
 	char symbol = '\0';
 	char *cp;
 	kdb_symtab_t symtab;
+	static char namebuf[KSYM_NAME_LEN];
 
 	/*
 	 * If the enable flags prohibit both arbitrary memory access
@@ -585,7 +586,7 @@ int kdbgetaddrarg(int argc, const char **argv, int *nextarg,
 	}
 
 	if (!found)
-		found = kdbnearsym(addr, &symtab);
+		found = kdbnearsym(addr, &symtab, namebuf);
 
 	(*nextarg)++;
 
@@ -1503,6 +1504,7 @@ static void kdb_md_line(const char *fmtstr, unsigned long addr,
 	int i;
 	int j;
 	unsigned long word;
+	static char namebuf[KSYM_NAME_LEN];
 
 	memset(cbuf, '\0', sizeof(cbuf));
 	if (phys)
@@ -1518,7 +1520,7 @@ static void kdb_md_line(const char *fmtstr, unsigned long addr,
 			break;
 		kdb_printf(fmtstr, word);
 		if (symbolic)
-			kdbnearsym(word, &symtab);
+			kdbnearsym(word, &symtab, namebuf);
 		else
 			memset(&symtab, 0, sizeof(symtab));
 		if (symtab.sym_name) {
diff --git a/kernel/debug/kdb/kdb_private.h b/kernel/debug/kdb/kdb_private.h
index b857a84de3b5..1707eeebc59a 100644
--- a/kernel/debug/kdb/kdb_private.h
+++ b/kernel/debug/kdb/kdb_private.h
@@ -108,8 +108,7 @@ extern char *kdbgetenv(const char *);
 extern int kdbgetaddrarg(int, const char **, int*, unsigned long *,
 			 long *, char **);
 extern int kdbgetsymval(const char *, kdb_symtab_t *);
-extern int kdbnearsym(unsigned long, kdb_symtab_t *);
-extern void kdbnearsym_cleanup(void);
+extern int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab, char *namebuf);
 extern char *kdb_strdup(const char *str, gfp_t type);
 extern void kdb_symbol_print(unsigned long, const kdb_symtab_t *, unsigned int);
 
@@ -233,10 +232,6 @@ extern struct task_struct *kdb_curr_task(int);
 
 #define GFP_KDB (in_dbg_master() ? GFP_ATOMIC : GFP_KERNEL)
 
-extern void *debug_kmalloc(size_t size, gfp_t flags);
-extern void debug_kfree(void *);
-extern void debug_kusage(void);
-
 extern struct task_struct *kdb_current_task;
 extern struct pt_regs *kdb_current_regs;
 
diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
index b59aad1f0b55..9b907a84f2db 100644
--- a/kernel/debug/kdb/kdb_support.c
+++ b/kernel/debug/kdb/kdb_support.c
@@ -57,8 +57,6 @@ int kdbgetsymval(const char *symname, kdb_symtab_t *symtab)
 }
 EXPORT_SYMBOL(kdbgetsymval);
 
-static char *kdb_name_table[100];	/* arbitrary size */
-
 /*
  * kdbnearsym -	Return the name of the symbol with the nearest address
  *	less than 'addr'.
@@ -79,13 +77,11 @@ static char *kdb_name_table[100];	/* arbitrary size */
  *	hold active strings, no kdb caller of kdbnearsym makes more
  *	than ~20 later calls before using a saved value.
  */
-int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
+int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab, char *namebuf)
 {
 	int ret = 0;
 	unsigned long symbolsize = 0;
 	unsigned long offset = 0;
-#define knt1_size 128		/* must be >= kallsyms table size */
-	char *knt1 = NULL;
 
 	if (KDB_DEBUG(AR))
 		kdb_printf("kdbnearsym: addr=0x%lx, symtab=%px\n", addr, symtab);
@@ -93,14 +89,9 @@ int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
 
 	if (addr < 4096)
 		goto out;
-	knt1 = debug_kmalloc(knt1_size, GFP_ATOMIC);
-	if (!knt1) {
-		kdb_printf("kdbnearsym: addr=0x%lx cannot kmalloc knt1\n",
-			   addr);
-		goto out;
-	}
+
 	symtab->sym_name = kallsyms_lookup(addr, &symbolsize , &offset,
-				(char **)(&symtab->mod_name), knt1);
+				(char **)(&symtab->mod_name), namebuf);
 	if (offset > 8*1024*1024) {
 		symtab->sym_name = NULL;
 		addr = offset = symbolsize = 0;
@@ -109,42 +100,6 @@ int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
 	symtab->sym_end = symtab->sym_start + symbolsize;
 	ret = symtab->sym_name != NULL && *(symtab->sym_name) != '\0';
 
-	if (ret) {
-		int i;
-		/* Another 2.6 kallsyms "feature".  Sometimes the sym_name is
-		 * set but the buffer passed into kallsyms_lookup is not used,
-		 * so it contains garbage.  The caller has to work out which
-		 * buffer needs to be saved.
-		 *
-		 * What was Rusty smoking when he wrote that code?
-		 */
-		if (symtab->sym_name != knt1) {
-			strncpy(knt1, symtab->sym_name, knt1_size);
-			knt1[knt1_size-1] = '\0';
-		}
-		for (i = 0; i < ARRAY_SIZE(kdb_name_table); ++i) {
-			if (kdb_name_table[i] &&
-			    strcmp(kdb_name_table[i], knt1) == 0)
-				break;
-		}
-		if (i >= ARRAY_SIZE(kdb_name_table)) {
-			debug_kfree(kdb_name_table[0]);
-			memmove(kdb_name_table, kdb_name_table+1,
-			       sizeof(kdb_name_table[0]) *
-			       (ARRAY_SIZE(kdb_name_table)-1));
-		} else {
-			debug_kfree(knt1);
-			knt1 = kdb_name_table[i];
-			memmove(kdb_name_table+i, kdb_name_table+i+1,
-			       sizeof(kdb_name_table[0]) *
-			       (ARRAY_SIZE(kdb_name_table)-i-1));
-		}
-		i = ARRAY_SIZE(kdb_name_table) - 1;
-		kdb_name_table[i] = knt1;
-		symtab->sym_name = kdb_name_table[i];
-		knt1 = NULL;
-	}
-
 	if (symtab->mod_name == NULL)
 		symtab->mod_name = "kernel";
 	if (KDB_DEBUG(AR))
@@ -152,23 +107,10 @@ int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
 		   "symtab->mod_name=%px, symtab->sym_name=%px (%s)\n", ret,
 		   symtab->sym_start, symtab->mod_name, symtab->sym_name,
 		   symtab->sym_name);
-
 out:
-	debug_kfree(knt1);
 	return ret;
 }
 
-void kdbnearsym_cleanup(void)
-{
-	int i;
-	for (i = 0; i < ARRAY_SIZE(kdb_name_table); ++i) {
-		if (kdb_name_table[i]) {
-			debug_kfree(kdb_name_table[i]);
-			kdb_name_table[i] = NULL;
-		}
-	}
-}
-
 static char ks_namebuf[KSYM_NAME_LEN+1], ks_namebuf_prev[KSYM_NAME_LEN+1];
 
 /*
@@ -259,11 +201,13 @@ void kdb_symbol_print(unsigned long addr, const kdb_symtab_t *symtab_p,
 		      unsigned int punc)
 {
 	kdb_symtab_t symtab, *symtab_p2;
+	static char namebuf[KSYM_NAME_LEN];
+
 	if (symtab_p) {
 		symtab_p2 = (kdb_symtab_t *)symtab_p;
 	} else {
 		symtab_p2 = &symtab;
-		kdbnearsym(addr, symtab_p2);
+		kdbnearsym(addr, symtab_p2, namebuf);
 	}
 	if (!(symtab_p2->sym_name || (punc & KDB_SP_VALUE)))
 		return;
@@ -665,232 +609,6 @@ unsigned long kdb_task_state(const struct task_struct *p, unsigned long mask)
 	return (mask & kdb_task_state_string(state)) != 0;
 }
 
-/* Last ditch allocator for debugging, so we can still debug even when
- * the GFP_ATOMIC pool has been exhausted.  The algorithms are tuned
- * for space usage, not for speed.  One smallish memory pool, the free
- * chain is always in ascending address order to allow coalescing,
- * allocations are done in brute force best fit.
- */
-
-struct debug_alloc_header {
-	u32 next;	/* offset of next header from start of pool */
-	u32 size;
-	void *caller;
-};
-
-/* The memory returned by this allocator must be aligned, which means
- * so must the header size.  Do not assume that sizeof(struct
- * debug_alloc_header) is a multiple of the alignment, explicitly
- * calculate the overhead of this header, including the alignment.
- * The rest of this code must not use sizeof() on any header or
- * pointer to a header.
- */
-#define dah_align 8
-#define dah_overhead ALIGN(sizeof(struct debug_alloc_header), dah_align)
-
-static u64 debug_alloc_pool_aligned[256*1024/dah_align];	/* 256K pool */
-static char *debug_alloc_pool = (char *)debug_alloc_pool_aligned;
-static u32 dah_first, dah_first_call = 1, dah_used, dah_used_max;
-
-/* Locking is awkward.  The debug code is called from all contexts,
- * including non maskable interrupts.  A normal spinlock is not safe
- * in NMI context.  Try to get the debug allocator lock, if it cannot
- * be obtained after a second then give up.  If the lock could not be
- * previously obtained on this cpu then only try once.
- *
- * sparse has no annotation for "this function _sometimes_ acquires a
- * lock", so fudge the acquire/release notation.
- */
-static DEFINE_SPINLOCK(dap_lock);
-static int get_dap_lock(void)
-	__acquires(dap_lock)
-{
-	static int dap_locked = -1;
-	int count;
-	if (dap_locked == smp_processor_id())
-		count = 1;
-	else
-		count = 1000;
-	while (1) {
-		if (spin_trylock(&dap_lock)) {
-			dap_locked = -1;
-			return 1;
-		}
-		if (!count--)
-			break;
-		udelay(1000);
-	}
-	dap_locked = smp_processor_id();
-	__acquire(dap_lock);
-	return 0;
-}
-
-void *debug_kmalloc(size_t size, gfp_t flags)
-{
-	unsigned int rem, h_offset;
-	struct debug_alloc_header *best, *bestprev, *prev, *h;
-	void *p = NULL;
-	if (!get_dap_lock()) {
-		__release(dap_lock);	/* we never actually got it */
-		return NULL;
-	}
-	h = (struct debug_alloc_header *)(debug_alloc_pool + dah_first);
-	if (dah_first_call) {
-		h->size = sizeof(debug_alloc_pool_aligned) - dah_overhead;
-		dah_first_call = 0;
-	}
-	size = ALIGN(size, dah_align);
-	prev = best = bestprev = NULL;
-	while (1) {
-		if (h->size >= size && (!best || h->size < best->size)) {
-			best = h;
-			bestprev = prev;
-			if (h->size == size)
-				break;
-		}
-		if (!h->next)
-			break;
-		prev = h;
-		h = (struct debug_alloc_header *)(debug_alloc_pool + h->next);
-	}
-	if (!best)
-		goto out;
-	rem = best->size - size;
-	/* The pool must always contain at least one header */
-	if (best->next == 0 && bestprev == NULL && rem < dah_overhead)
-		goto out;
-	if (rem >= dah_overhead) {
-		best->size = size;
-		h_offset = ((char *)best - debug_alloc_pool) +
-			   dah_overhead + best->size;
-		h = (struct debug_alloc_header *)(debug_alloc_pool + h_offset);
-		h->size = rem - dah_overhead;
-		h->next = best->next;
-	} else
-		h_offset = best->next;
-	best->caller = __builtin_return_address(0);
-	dah_used += best->size;
-	dah_used_max = max(dah_used, dah_used_max);
-	if (bestprev)
-		bestprev->next = h_offset;
-	else
-		dah_first = h_offset;
-	p = (char *)best + dah_overhead;
-	memset(p, POISON_INUSE, best->size - 1);
-	*((char *)p + best->size - 1) = POISON_END;
-out:
-	spin_unlock(&dap_lock);
-	return p;
-}
-
-void debug_kfree(void *p)
-{
-	struct debug_alloc_header *h;
-	unsigned int h_offset;
-	if (!p)
-		return;
-	if ((char *)p < debug_alloc_pool ||
-	    (char *)p >= debug_alloc_pool + sizeof(debug_alloc_pool_aligned)) {
-		kfree(p);
-		return;
-	}
-	if (!get_dap_lock()) {
-		__release(dap_lock);	/* we never actually got it */
-		return;		/* memory leak, cannot be helped */
-	}
-	h = (struct debug_alloc_header *)((char *)p - dah_overhead);
-	memset(p, POISON_FREE, h->size - 1);
-	*((char *)p + h->size - 1) = POISON_END;
-	h->caller = NULL;
-	dah_used -= h->size;
-	h_offset = (char *)h - debug_alloc_pool;
-	if (h_offset < dah_first) {
-		h->next = dah_first;
-		dah_first = h_offset;
-	} else {
-		struct debug_alloc_header *prev;
-		unsigned int prev_offset;
-		prev = (struct debug_alloc_header *)(debug_alloc_pool +
-						     dah_first);
-		while (1) {
-			if (!prev->next || prev->next > h_offset)
-				break;
-			prev = (struct debug_alloc_header *)
-				(debug_alloc_pool + prev->next);
-		}
-		prev_offset = (char *)prev - debug_alloc_pool;
-		if (prev_offset + dah_overhead + prev->size == h_offset) {
-			prev->size += dah_overhead + h->size;
-			memset(h, POISON_FREE, dah_overhead - 1);
-			*((char *)h + dah_overhead - 1) = POISON_END;
-			h = prev;
-			h_offset = prev_offset;
-		} else {
-			h->next = prev->next;
-			prev->next = h_offset;
-		}
-	}
-	if (h_offset + dah_overhead + h->size == h->next) {
-		struct debug_alloc_header *next;
-		next = (struct debug_alloc_header *)
-			(debug_alloc_pool + h->next);
-		h->size += dah_overhead + next->size;
-		h->next = next->next;
-		memset(next, POISON_FREE, dah_overhead - 1);
-		*((char *)next + dah_overhead - 1) = POISON_END;
-	}
-	spin_unlock(&dap_lock);
-}
-
-void debug_kusage(void)
-{
-	struct debug_alloc_header *h_free, *h_used;
-#ifdef	CONFIG_IA64
-	/* FIXME: using dah for ia64 unwind always results in a memory leak.
-	 * Fix that memory leak first, then set debug_kusage_one_time = 1 for
-	 * all architectures.
-	 */
-	static int debug_kusage_one_time;
-#else
-	static int debug_kusage_one_time = 1;
-#endif
-	if (!get_dap_lock()) {
-		__release(dap_lock);	/* we never actually got it */
-		return;
-	}
-	h_free = (struct debug_alloc_header *)(debug_alloc_pool + dah_first);
-	if (dah_first == 0 &&
-	    (h_free->size == sizeof(debug_alloc_pool_aligned) - dah_overhead ||
-	     dah_first_call))
-		goto out;
-	if (!debug_kusage_one_time)
-		goto out;
-	debug_kusage_one_time = 0;
-	kdb_printf("%s: debug_kmalloc memory leak dah_first %d\n",
-		   __func__, dah_first);
-	if (dah_first) {
-		h_used = (struct debug_alloc_header *)debug_alloc_pool;
-		kdb_printf("%s: h_used %px size %d\n", __func__, h_used,
-			   h_used->size);
-	}
-	do {
-		h_used = (struct debug_alloc_header *)
-			  ((char *)h_free + dah_overhead + h_free->size);
-		kdb_printf("%s: h_used %px size %d caller %px\n",
-			   __func__, h_used, h_used->size, h_used->caller);
-		h_free = (struct debug_alloc_header *)
-			  (debug_alloc_pool + h_free->next);
-	} while (h_free->next);
-	h_used = (struct debug_alloc_header *)
-		  ((char *)h_free + dah_overhead + h_free->size);
-	if ((char *)h_used - debug_alloc_pool !=
-	    sizeof(debug_alloc_pool_aligned))
-		kdb_printf("%s: h_used %px size %d caller %px\n",
-			   __func__, h_used, h_used->size, h_used->caller);
-out:
-	spin_unlock(&dap_lock);
-}
-
 /* Maintain a small stack of kdb_flags to allow recursion without disturbing
  * the global kdb state.
  */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2021-06-17 11:51 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-03-23  6:55 [PATCH v2] kdb: Get rid of custom debug heap allocator Sumit Garg
2021-06-17 11:25 ` Daniel Thompson
2021-06-17 11:51   ` Sumit Garg
  -- strict thread matches above, loose matches on Subject: below --
2021-02-26  9:53 Sumit Garg
2021-02-26 10:59 ` Daniel Thompson
2021-02-26 12:42   ` Sumit Garg
2021-02-26 17:37     ` Daniel Thompson
2021-03-01  6:03       ` Sumit Garg
2021-03-19 17:35         ` Daniel Thompson
2021-03-23  6:26           ` Sumit Garg
