linux-kernel.vger.kernel.org archive mirror
* [PATCH 1/2 v3] lib: debugobjects: export max loops counter
@ 2018-01-03 23:56 Yang Shi
  2018-01-03 23:56 ` [PATCH 2/2 v3] lib: debugobjects: touch watchdog to avoid softlockup when !CONFIG_PREEMPT Yang Shi
  0 siblings, 1 reply; 8+ messages in thread
From: Yang Shi @ 2018-01-03 23:56 UTC (permalink / raw)
  To: longman, tglx; +Cc: Yang Shi, linux-kernel

Currently the max chain counter is exported to debugfs, but it only records
the iteration count of the inner loop. However, there might be a significant
number of iterations of the outer loop as well, so it may take a significant
amount of time to finish all of the checks. This may occasionally cause a
lockup on a !CONFIG_PREEMPT kernel build.

Record the total loop count as well and export it to debugfs so that the
user can be aware of the real overhead.
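
As a rough illustration, the relation between the two counters can be
sketched in userspace C (the bucket array and function names here are
hypothetical stand-ins for the kernel's hash buckets, not the actual
debugobjects code):

```c
#include <stddef.h>

/* Hypothetical userspace model: chain_len[i] stands in for the length of
 * the object chain in hash bucket i that __debug_check_no_obj_freed()
 * walks. max_chain records the longest single chain (inner loop only);
 * max_loops records the total iterations of one whole pass (outer loop),
 * which is what actually determines how long the check runs. */
static int max_chain;
static int max_loops;

static void record_free_pass(const int *chain_len, size_t nbuckets)
{
	int total = 0;

	for (size_t i = 0; i < nbuckets; i++) {
		if (chain_len[i] > max_chain)
			max_chain = chain_len[i];
		total += chain_len[i];
	}
	if (total > max_loops)
		max_loops = total;
}
```

With chains of length {3, 121, 40}, max_chain ends up at 121 while
max_loops records 164, which shows why the existing counter alone
understates the work done per pass.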

Then the output of /sys/kernel/debug/debug_objects/stats looks like:

max_chain     :121
max_loops     :543267
warnings      :0
fixups        :0
pool_free     :1764
pool_min_free :341
pool_used     :86438
pool_max_used :268887
objs_allocated:6068254
objs_freed    :5981076

Signed-off-by: Yang Shi <yang.s@alibaba-inc.com>
CC: Waiman Long <longman@redhat.com>
CC: Thomas Gleixner <tglx@linutronix.de>
---
v1 --> v2:
 * Correct the typo in the commit log

 lib/debugobjects.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index 2f5349c..166488d 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -50,6 +50,7 @@ struct debug_bucket {
 static struct kmem_cache	*obj_cache;
 
 static int			debug_objects_maxchain __read_mostly;
+static int			debug_objects_maxloops __read_mostly;
 static int			debug_objects_fixups __read_mostly;
 static int			debug_objects_warnings __read_mostly;
 static int			debug_objects_enabled __read_mostly
@@ -720,7 +721,7 @@ static void __debug_check_no_obj_freed(const void *address, unsigned long size)
 	enum debug_obj_state state;
 	struct debug_bucket *db;
 	struct debug_obj *obj;
-	int cnt;
+	int cnt, max_loops = 0;
 
 	saddr = (unsigned long) address;
 	eaddr = saddr + size;
@@ -765,7 +766,12 @@ static void __debug_check_no_obj_freed(const void *address, unsigned long size)
 
 		if (cnt > debug_objects_maxchain)
 			debug_objects_maxchain = cnt;
+
+		max_loops += cnt;
 	}
+
+	if (max_loops > debug_objects_maxloops)
+		debug_objects_maxloops = max_loops;
 }
 
 void debug_check_no_obj_freed(const void *address, unsigned long size)
@@ -780,6 +786,7 @@ void debug_check_no_obj_freed(const void *address, unsigned long size)
 static int debug_stats_show(struct seq_file *m, void *v)
 {
 	seq_printf(m, "max_chain     :%d\n", debug_objects_maxchain);
+	seq_printf(m, "max_loops     :%d\n", debug_objects_maxloops);
 	seq_printf(m, "warnings      :%d\n", debug_objects_warnings);
 	seq_printf(m, "fixups        :%d\n", debug_objects_fixups);
 	seq_printf(m, "pool_free     :%d\n", obj_pool_free);
-- 
1.8.3.1


* [PATCH 2/2 v3] lib: debugobjects: touch watchdog to avoid softlockup when !CONFIG_PREEMPT
  2018-01-03 23:56 [PATCH 1/2 v3] lib: debugobjects: export max loops counter Yang Shi
@ 2018-01-03 23:56 ` Yang Shi
  2018-01-17 12:21   ` Thomas Gleixner
  0 siblings, 1 reply; 8+ messages in thread
From: Yang Shi @ 2018-01-03 23:56 UTC (permalink / raw)
  To: longman, tglx; +Cc: Yang Shi, linux-kernel

There are nested loops on the debug objects free path; sometimes they may
take over a hundred thousand iterations, which can occasionally cause a soft
lockup on !CONFIG_PREEMPT kernels, like below:

 NMI watchdog: BUG: soft lockup - CPU#15 stuck for 22s! [stress-ng-getde:110342]
 Modules linked in: binfmt_misc(E) tcp_diag(E)
inet_diag(E) bonding(E) intel_rapl(E) iosf_mbi(E)
x86_pkg_temp_thermal(E) coretemp(E) iTCO_wdt(E) iTCO_vendor_support(E)
kvm_intel(E) kvm(E) irqbypass(E) crct10dif_pclmul(E) crc32_pclmul(E)
dcdbas(E) ghash_clmulni_intel(E) aesni_intel(E) lrw(E) gf128mul(E)
glue_helper(E) ablk_helper(E) ipmi_devintf(E) sg(E) cryptd(E) pcspkr(E)
mei_me(E) lpc_ich(E) ipmi_si(E) mfd_core(E) mei(E) shpchp(E) wmi(E)
ipmi_msghandler(E) acpi_power_meter(E) nfsd(E) auth_rpcgss(E) nfs_acl(E)
lockd(E) grace(E) sunrpc(E) ip_tables(E) ext4(E) jbd2(E) mbcache(E)
sd_mod(E) mgag200(E) igb(E) drm_kms_helper(E) ixgbe(E) syscopyarea(E)
mdio(E) sysfillrect(E) sysimgblt(E) ptp(E) fb_sys_fops(E) pps_core(E)
ttm(E) drm(E) crc32c_intel(E) i2c_algo_bit(E) i2c_core(E)
megaraid_sas(E)
  dca(E)
 irq event stamp: 4340444
 hardirqs last  enabled at (4340443): [<ffffffff817d6476>]
_raw_spin_unlock_irqrestore+0x36/0x60
 hardirqs last disabled at (4340444): [<ffffffff817d8c81>]
apic_timer_interrupt+0x91/0xa0
 softirqs last  enabled at (4340398): [<ffffffff817da179>]
__do_softirq+0x349/0x50e
 softirqs last disabled at (4340391): [<ffffffff810a5255>]
irq_exit+0xf5/0x110
 CPU: 15 PID: 110342 Comm: stress-ng-getde Tainted: G
E   4.9.44-003.ali3000.alios7.x86_64.debug #1
 Hardware name: Dell Inc. PowerEdge R720xd/0X6FFV, BIOS
1.6.0 03/07/2013
 task: ffff884cbb0d0000 task.stack: ffff884cabc70000
 RIP: 0010:[<ffffffff817d647b>]  [<ffffffff817d647b>]
_raw_spin_unlock_irqrestore+0x3b/0x60
 RSP: 0018:ffff884cabc77b78  EFLAGS: 00000292
 RAX: ffff884cbb0d0000 RBX: 0000000000000292 RCX: 0000000000000000
 RDX: ffff884cbb0d0000 RSI: 0000000000000001 RDI: 0000000000000292
 RBP: ffff884cabc77b88 R08: 0000000000000000 R09: 0000000000000000
 R10: 0000000000000001 R11: 0000000000000001 R12: ffffffff8357a0d8
 R13: ffff884cabc77bc8 R14: ffffffff8357a0d0 R15: 00000000000000fc
 FS:  00002aee845fd2c0(0000) GS:ffff8852bd400000(0000) knlGS:0000000000000000
 CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 CR2: 0000000002991808 CR3: 0000005123abf000 CR4: 00000000000406e0
 Stack:
  ffff884ff4fe0000 ffff884ff4fd8000 ffff884cabc77c00 ffffffff8141177e
  0000000000000202 ffff884cbb0d0000 ffff884cabc77bc8 0000000000000006
  ffff884ff4fda000 ffffffff8357a0d8 0000000000000000 91f5d976f6020b6c
 Call Trace:
  [<ffffffff8141177e>] debug_check_no_obj_freed+0x13e/0x220
  [<ffffffff811f8751>] __free_pages_ok+0x1f1/0x5c0
  [<ffffffff811fa785>] __free_pages+0x25/0x40
  [<ffffffff812638db>] __free_slab+0x19b/0x270
  [<ffffffff812639e9>] discard_slab+0x39/0x50
  [<ffffffff812679f7>] __slab_free+0x207/0x270
  [<ffffffff81269966>] ___cache_free+0xa6/0xb0
  [<ffffffff8126c267>] qlist_free_all+0x47/0x80
  [<ffffffff8126c5a9>] quarantine_reduce+0x159/0x190
  [<ffffffff8126b3bf>] kasan_kmalloc+0xaf/0xc0
  [<ffffffff8126b8a2>] kasan_slab_alloc+0x12/0x20
  [<ffffffff81265e8a>] kmem_cache_alloc+0xfa/0x360
  [<ffffffff812abc8f>] ? getname_flags+0x4f/0x1f0
  [<ffffffff812abc8f>] getname_flags+0x4f/0x1f0
  [<ffffffff812abe42>] getname+0x12/0x20
  [<ffffffff81298da9>] do_sys_open+0xf9/0x210
  [<ffffffff81298ede>] SyS_open+0x1e/0x20
  [<ffffffff817d6e01>] entry_SYSCALL_64_fastpath+0x1f/0xc2
 Code: 7f 18 53 48 8b 55 08 48 89 f3 be 01 00 00 00 e8 3c
cd 92 ff 4c 89 e7 e8 f4 0e 93 ff f6 c7 02 74 1b e8 3a ac 92 ff 48 89 df
57 9d <66> 66 90 66 90 65 ff 0d d1 ff 83 7e 5b 41 5c 5d c3 48 89 df 57

The code path might be called in either atomic or non-atomic context, so
touch the softlockup watchdog instead of calling cond_resched(), which might
sleep. However, it is unnecessary to touch the watchdog on every iteration,
so just touch it every 10000 (best estimate) iterations.
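
The throttling condition can be factored out as a pure predicate; this is a
hypothetical userspace restatement of the check in the patch below, not
kernel code:

```c
/* Touch the watchdog only when suppression is enabled and the loop
 * counter has passed 10000 and sits exactly on a multiple of 10000,
 * so the touch happens at most once per 10000 iterations. */
static int should_touch_watchdog(int loops, int suppress_lockup)
{
	return suppress_lockup != 0 &&
	       loops > 10000 &&
	       (loops % 10000) == 0;
}
```

Note the predicate is false at exactly 10000 (the `>` comparison) and only
fires from 20000, 30000, ... onward while the knob is non-zero.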

Also introduce a new knob: /sys/kernel/debug/debug_objects/suppress_lockup.
A zero value means the softlockup message is not suppressed by touching the
watchdog; a non-zero value means it is suppressed. The default value is
zero.

Signed-off-by: Yang Shi <yang.s@alibaba-inc.com>
CC: Waiman Long <longman@redhat.com>
CC: Thomas Gleixner <tglx@linutronix.de>
---
v2 --> v3:
 * Use debugfs_create_u32() helper API per Waiman's suggestion
v1 --> v2:
 * Added suppress_lockup knob in debugfs per Waiman's suggestion

 lib/debugobjects.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index 166488d..c960221 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -19,6 +19,7 @@
 #include <linux/slab.h>
 #include <linux/hash.h>
 #include <linux/kmemleak.h>
+#include <linux/nmi.h>
 
 #define ODEBUG_HASH_BITS	14
 #define ODEBUG_HASH_SIZE	(1 << ODEBUG_HASH_BITS)
@@ -67,6 +68,8 @@ struct debug_bucket {
 static int			debug_objects_allocated;
 static int			debug_objects_freed;
 
+static int			suppress_lockup;
+
 static void free_obj_work(struct work_struct *work);
 static DECLARE_WORK(debug_obj_work, free_obj_work);
 
@@ -768,6 +771,10 @@ static void __debug_check_no_obj_freed(const void *address, unsigned long size)
 			debug_objects_maxchain = cnt;
 
 		max_loops += cnt;
+
+		if (max_loops > 10000 && ((max_loops % 10000) == 0)
+		    && suppress_lockup != 0)
+			touch_softlockup_watchdog();
 	}
 
 	if (max_loops > debug_objects_maxloops)
@@ -812,7 +819,7 @@ static int debug_stats_open(struct inode *inode, struct file *filp)
 
 static int __init debug_objects_init_debugfs(void)
 {
-	struct dentry *dbgdir, *dbgstats;
+	struct dentry *dbgdir, *dbgstats, *dbglockup;
 
 	if (!debug_objects_enabled)
 		return 0;
@@ -826,10 +833,15 @@ static int __init debug_objects_init_debugfs(void)
 	if (!dbgstats)
 		goto err;
 
+	dbglockup = debugfs_create_u32("suppress_lockup", 0644, dbgdir,
+					&suppress_lockup);
+	if (!dbglockup)
+		goto err;
+
 	return 0;
 
 err:
-	debugfs_remove(dbgdir);
+	debugfs_remove_recursive(dbgdir);
 
 	return -ENOMEM;
 }
-- 
1.8.3.1


* Re: [PATCH 2/2 v3] lib: debugobjects: touch watchdog to avoid softlockup when !CONFIG_PREEMPT
  2018-01-03 23:56 ` [PATCH 2/2 v3] lib: debugobjects: touch watchdog to avoid softlockup when !CONFIG_PREEMPT Yang Shi
@ 2018-01-17 12:21   ` Thomas Gleixner
  2018-01-18  1:41     ` Yang Shi
  0 siblings, 1 reply; 8+ messages in thread
From: Thomas Gleixner @ 2018-01-17 12:21 UTC (permalink / raw)
  To: Yang Shi; +Cc: longman, linux-kernel

On Thu, 4 Jan 2018, Yang Shi wrote:

> There are nested loops on the debug objects free path; sometimes they may
> take over a hundred thousand iterations, which can occasionally cause a
> soft lockup on !CONFIG_PREEMPT kernels, like below:

Please trim back traces. The whole module info and whatever is completely
irrelevant.

> @@ -768,6 +771,10 @@ static void __debug_check_no_obj_freed(const void *address, unsigned long size)
>  			debug_objects_maxchain = cnt;
>  
>  		max_loops += cnt;
> +
> +		if (max_loops > 10000 && ((max_loops % 10000) == 0)
> +		    && suppress_lockup != 0)
> +			touch_softlockup_watchdog();

This is voodoo programming.

There are two things which can be done here:

 1) The collected objects can be put on a global free list and work
    scheduled to free them piecewise.

 2) We can do a cond_resched() if not in atomic context and interrupts are
    enabled.

I rather prefer to make that stuff robust than having crystal ball
constants and magic debugfs knobs.

Thanks,

	tglx


* Re: [PATCH 2/2 v3] lib: debugobjects: touch watchdog to avoid softlockup when !CONFIG_PREEMPT
  2018-01-17 12:21   ` Thomas Gleixner
@ 2018-01-18  1:41     ` Yang Shi
  2018-01-18  9:02       ` Thomas Gleixner
  0 siblings, 1 reply; 8+ messages in thread
From: Yang Shi @ 2018-01-18  1:41 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: longman, linux-kernel



On 1/17/18 4:21 AM, Thomas Gleixner wrote:
> On Thu, 4 Jan 2018, Yang Shi wrote:
> 
>> There are nested loops on the debug objects free path; sometimes they may
>> take over a hundred thousand iterations, which can occasionally cause a
>> soft lockup on !CONFIG_PREEMPT kernels, like below:
> 
> Please trim back traces. The whole module info and whatever is completely
> irrelevant.

Yes, sure.

> 
>> @@ -768,6 +771,10 @@ static void __debug_check_no_obj_freed(const void *address, unsigned long size)
>>   			debug_objects_maxchain = cnt;
>>   
>>   		max_loops += cnt;
>> +
>> +		if (max_loops > 10000 && ((max_loops % 10000) == 0)
>> +		    && suppress_lockup != 0)
>> +			touch_softlockup_watchdog();
> 
> This is voodoo programming.
> 
> There are two things which can be done here:
> 
>   1) The collected objects can be put on a global free list and work
>      scheduled to free them piecewise.

I don't get your point here. Freeing objects is already done in a work:
free_object() -> schedule_work().

Do you mean freeing all of them outside the for loop in a batch, i.e. not
calling free_object() in the for loop?

> 
>   2) We can do a cond_resched() if not in atomic context and interrupts are
>      enabled.

I did try this before I went with the touch-softlockup-watchdog approach.
The problem is that in_atomic() can't tell whether it is in atomic context
on a non-preempt kernel. For a preempt kernel, it is easy.

Thanks,
Yang

> 
> I rather prefer to make that stuff robust than having crystal ball
> constants and magic debugfs knobs.
> 
> Thanks,
> 
> 	tglx
> 


* Re: [PATCH 2/2 v3] lib: debugobjects: touch watchdog to avoid softlockup when !CONFIG_PREEMPT
  2018-01-18  1:41     ` Yang Shi
@ 2018-01-18  9:02       ` Thomas Gleixner
  2018-01-18 15:05         ` Peter Zijlstra
  2018-01-20  0:06         ` Yang Shi
  0 siblings, 2 replies; 8+ messages in thread
From: Thomas Gleixner @ 2018-01-18  9:02 UTC (permalink / raw)
  To: Yang Shi; +Cc: longman, LKML, Peter Zijlstra

On Thu, 18 Jan 2018, Yang Shi wrote:
> On 1/17/18 4:21 AM, Thomas Gleixner wrote:
> > There are two things which can be done here:
> > 
> >   1) The collected objects can be put on a global free list and work
> >      scheduled to free them piecewise.
> 
> I don't get your point here. Freeing objects is already done in a work:
> free_object() -> schedule_work().

But it's doing it in a loop for each freed object.

> Do you mean freeing all of them outside the for loop in a batch, i.e. not
> calling free_object() in the for loop?

Something like this:

diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index 2f5349c6e81a..d36940cdc658 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -42,6 +42,7 @@ static struct debug_obj		obj_static_pool[ODEBUG_POOL_SIZE] __initdata;
 static DEFINE_RAW_SPINLOCK(pool_lock);
 
 static HLIST_HEAD(obj_pool);
+static HLIST_HEAD(obj_to_free);
 
 static int			obj_pool_min_free = ODEBUG_POOL_SIZE;
 static int			obj_pool_free = ODEBUG_POOL_SIZE;
@@ -714,13 +715,12 @@ EXPORT_SYMBOL_GPL(debug_object_active_state);
 static void __debug_check_no_obj_freed(const void *address, unsigned long size)
 {
 	unsigned long flags, oaddr, saddr, eaddr, paddr, chunks;
-	struct hlist_node *tmp;
-	HLIST_HEAD(freelist);
 	struct debug_obj_descr *descr;
 	enum debug_obj_state state;
 	struct debug_bucket *db;
+	struct hlist_node *tmp;
 	struct debug_obj *obj;
-	int cnt;
+	int cnt, freed = 0;
 
 	saddr = (unsigned long) address;
 	eaddr = saddr + size;
@@ -751,21 +751,22 @@ static void __debug_check_no_obj_freed(const void *address, unsigned long size)
 				goto repeat;
 			default:
 				hlist_del(&obj->node);
-				hlist_add_head(&obj->node, &freelist);
+				/* Put them on the freelist */
+				raw_spin_lock_irqsave(&pool_lock, flags);
+				hlist_add_head(&obj->node, &obj_to_free);
+				raw_spin_unlock_irqrestore(&pool_lock, flags);
+				freed++;
 				break;
 			}
 		}
 		raw_spin_unlock_irqrestore(&db->lock, flags);
 
-		/* Now free them */
-		hlist_for_each_entry_safe(obj, tmp, &freelist, node) {
-			hlist_del(&obj->node);
-			free_object(obj);
-		}
-
 		if (cnt > debug_objects_maxchain)
 			debug_objects_maxchain = cnt;
 	}
+
+	if (freed)
+		schedule_work(.....);

The allocation side can look at the free list as well and grab objects from
there when the pool level is low, if that happens before the work can do so.
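
The batching idea can be modelled in plain userspace C; list and function
names here are hypothetical, and the single-threaded model stands in for
the pool_lock serialization:

```c
struct node { struct node *next; };

/* Global to-free list and a counter standing in for schedule_work():
 * objects are collected per pass, and the worker is scheduled once per
 * batch instead of once per freed object. */
static struct node *obj_to_free;
static int work_scheduled;

static void collect_object(struct node *obj)
{
	/* In the kernel, this splice would run under pool_lock. */
	obj->next = obj_to_free;
	obj_to_free = obj;
}

static int finish_free_pass(int freed)
{
	if (freed)
		work_scheduled++;	/* one schedule_work() per batch */
	return work_scheduled;
}
```

Collecting three objects and then finishing the pass schedules the worker
once, rather than three times as with per-object free_object() calls.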


> > 
> >   2) We can do a cond_resched() if not in atomic context and interrupts are
> >      enabled.
> 
> I did try this before I went with the touch-softlockup-watchdog approach.
> The problem is that in_atomic() can't tell whether it is in atomic context
> on a non-preempt kernel. For a preempt kernel, it is easy.

Peter, can we do anything about that?

Thanks,

	tglx


* Re: [PATCH 2/2 v3] lib: debugobjects: touch watchdog to avoid softlockup when !CONFIG_PREEMPT
  2018-01-18  9:02       ` Thomas Gleixner
@ 2018-01-18 15:05         ` Peter Zijlstra
  2018-01-18 15:15           ` Thomas Gleixner
  2018-01-20  0:06         ` Yang Shi
  1 sibling, 1 reply; 8+ messages in thread
From: Peter Zijlstra @ 2018-01-18 15:05 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: Yang Shi, longman, LKML

On Thu, Jan 18, 2018 at 10:02:25AM +0100, Thomas Gleixner wrote:
> > >   2) We can do a cond_resched() if not in atomic context and interrupts are
> > >      enabled.
> > 
> > I did try this before I went with the touch-softlockup-watchdog approach.
> > The problem is that in_atomic() can't tell whether it is in atomic context
> > on a non-preempt kernel. For a preempt kernel, it is easy.
> 
> Peter, can we do anything about that?

Not really :/ Take for instance a spinlock on !PREEMPT: only the lock
variable is touched, preempt_count isn't. So we cannot know.
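
Peter's point can be illustrated with a small userspace model (hypothetical
names; this is not how the kernel spells any of it): on !CONFIG_PREEMPT,
taking a spinlock touches only the lock word, so a preempt_count-based
in_atomic() check reports "not atomic" even while the lock is held.

```c
/* Model: on a !CONFIG_PREEMPT build, taking a spinlock does not bump
 * preempt_count, so an in_atomic()-style check cannot see the lock. */
static int preempt_count_model;
static int lock_word;

static void model_spin_lock(int config_preempt)
{
	lock_word = 1;
	if (config_preempt)
		preempt_count_model++;	/* only preemptible kernels do this */
}

static int model_in_atomic(void)
{
	return preempt_count_model != 0;
}
```

With config_preempt == 0, the lock is held but model_in_atomic() still
returns 0, which is exactly the false negative that makes a conditional
cond_resched() unsafe there.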


* Re: [PATCH 2/2 v3] lib: debugobjects: touch watchdog to avoid softlockup when !CONFIG_PREEMPT
  2018-01-18 15:05         ` Peter Zijlstra
@ 2018-01-18 15:15           ` Thomas Gleixner
  0 siblings, 0 replies; 8+ messages in thread
From: Thomas Gleixner @ 2018-01-18 15:15 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Yang Shi, longman, LKML

On Thu, 18 Jan 2018, Peter Zijlstra wrote:

> On Thu, Jan 18, 2018 at 10:02:25AM +0100, Thomas Gleixner wrote:
> > > >   2) We can do a cond_resched() if not in atomic context and interrupts are
> > > >      enabled.
> > > 
> > > I did try this before I went with the touch-softlockup-watchdog approach.
> > > The problem is that in_atomic() can't tell whether it is in atomic context
> > > on a non-preempt kernel. For a preempt kernel, it is easy.
> > 
> > Peter, can we do anything about that?
> 
> Not really :/ Take for instance a spinlock on !PREEMPT, only the lock
> variable is touched, preempt_count isn't. So we cannot know.

Fair enough.


* Re: [PATCH 2/2 v3] lib: debugobjects: touch watchdog to avoid softlockup when !CONFIG_PREEMPT
  2018-01-18  9:02       ` Thomas Gleixner
  2018-01-18 15:05         ` Peter Zijlstra
@ 2018-01-20  0:06         ` Yang Shi
  1 sibling, 0 replies; 8+ messages in thread
From: Yang Shi @ 2018-01-20  0:06 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: longman, LKML, Peter Zijlstra



On 1/18/18 1:02 AM, Thomas Gleixner wrote:
> On Thu, 18 Jan 2018, Yang Shi wrote:
>> On 1/17/18 4:21 AM, Thomas Gleixner wrote:
>>> There are two things which can be done here:
>>>
>>>    1) The collected objects can be put on a global free list and work
>>>       scheduled to free them piecewise.
>>
>> I don't get your point here. Freeing objects is already done in a work:
>> free_object() -> schedule_work().
> 
> But it's doing it in a loop for each freed object.
> 
>> Do you mean freeing all of them outside the for loop in a batch, i.e. not
>> calling free_object() in the for loop?
> 
> Something like this:
> 
> diff --git a/lib/debugobjects.c b/lib/debugobjects.c
> index 2f5349c6e81a..d36940cdc658 100644
> --- a/lib/debugobjects.c
> +++ b/lib/debugobjects.c
> @@ -42,6 +42,7 @@ static struct debug_obj		obj_static_pool[ODEBUG_POOL_SIZE] __initdata;
>   static DEFINE_RAW_SPINLOCK(pool_lock);
>   
>   static HLIST_HEAD(obj_pool);
> +static HLIST_HEAD(obj_to_free);
>   
>   static int			obj_pool_min_free = ODEBUG_POOL_SIZE;
>   static int			obj_pool_free = ODEBUG_POOL_SIZE;
> @@ -714,13 +715,12 @@ EXPORT_SYMBOL_GPL(debug_object_active_state);
>   static void __debug_check_no_obj_freed(const void *address, unsigned long size)
>   {
>   	unsigned long flags, oaddr, saddr, eaddr, paddr, chunks;
> -	struct hlist_node *tmp;
> -	HLIST_HEAD(freelist);
>   	struct debug_obj_descr *descr;
>   	enum debug_obj_state state;
>   	struct debug_bucket *db;
> +	struct hlist_node *tmp;
>   	struct debug_obj *obj;
> -	int cnt;
> +	int cnt, freed = 0;
>   
>   	saddr = (unsigned long) address;
>   	eaddr = saddr + size;
> @@ -751,21 +751,22 @@ static void __debug_check_no_obj_freed(const void *address, unsigned long size)
>   				goto repeat;
>   			default:
>   				hlist_del(&obj->node);
> -				hlist_add_head(&obj->node, &freelist);
> +				/* Put them on the freelist */
> +				raw_spin_lock_irqsave(&pool_lock, flags);
> +				hlist_add_head(&obj->node, &obj_to_free);
> +				raw_spin_unlock_irqrestore(&pool_lock, flags);
> +				freed++;
>   				break;
>   			}
>   		}
>   		raw_spin_unlock_irqrestore(&db->lock, flags);
>   
> -		/* Now free them */
> -		hlist_for_each_entry_safe(obj, tmp, &freelist, node) {
> -			hlist_del(&obj->node);
> -			free_object(obj);
> -		}
> -
>   		if (cnt > debug_objects_maxchain)
>   			debug_objects_maxchain = cnt;
>   	}
> +
> +	if (freed)
> +		schedule_work(.....);
> 
> The allocation side can look at the free list as well and grab objects from
> there when the pool level is low, if that happens before the work can do so.

Thanks a lot for the example. I'm working on a preliminary patch and will
send it out for review once I finish my tests.

> 
> 
>>>
>>>    2) We can do a cond_resched() if not in atomic context and interrupts are
>>>       enabled.
>>
>> I did try this before I went with the touch-softlockup-watchdog approach.
>> The problem is that in_atomic() can't tell whether it is in atomic context
>> on a non-preempt kernel. For a preempt kernel, it is easy.
> 
> Peter, can we do anything about that?

As Peter confirmed, in_atomic() is not functional on a !PREEMPT kernel, so
shall we still use the touch-softlockup-watchdog approach?

Thanks,
Yang

> 
> Thanks,
> 
> 	tglx
> 

