linux-kernel.vger.kernel.org archive mirror
* ksm/memory hotplug: lockdep warning for ksm_thread_mutex vs. (memory_chain).rwsem
@ 2012-02-02 16:13 Gerald Schaefer
  2012-02-02 23:00 ` KOSAKI Motohiro
  0 siblings, 1 reply; 4+ messages in thread
From: Gerald Schaefer @ 2012-02-02 16:13 UTC (permalink / raw)
  To: KOSAKI Motohiro, Hugh Dickins
  Cc: linux-mm, linux-kernel, Martin Schwidefsky, Heiko Carstens,
	Andrea Arcangeli, Chris Wright, Izik Eidus, KAMEZAWA Hiroyuki

Setting a memory block offline triggers the following lockdep warning. This
looks exactly like the issue reported by Kosaki Motohiro in
https://lkml.org/lkml/2010/10/25/110. Seems like the resulting commit a0b0f58cdd
did not fix the lockdep warning. I'm able to reproduce it with current 3.3.0-rc2
as well as 2.6.37-rc4-00147-ga0b0f58.

I'm not familiar with lockdep annotations, but I tried using down_read_nested()
for (memory_chain).rwsem, similar to the mutex_lock_nested() which was
introduced for ksm_thread_mutex, but that didn't help.


======================================================
[ INFO: possible circular locking dependency detected ]
3.3.0-rc2 #8 Not tainted
-------------------------------------------------------
sh/973 is trying to acquire lock:
 ((memory_chain).rwsem){.+.+.+}, at: [<000000000015b0e4>] __blocking_notifier_call_chain+0x40/0x8c

but task is already holding lock:
 (ksm_thread_mutex/1){+.+.+.}, at: [<0000000000247484>] ksm_memory_callback+0x48/0xd0

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (ksm_thread_mutex/1){+.+.+.}:
       [<0000000000195746>] __lock_acquire+0x47a/0xbd4
       [<00000000001964b6>] lock_acquire+0xc2/0x148
       [<00000000005dba62>] mutex_lock_nested+0x5a/0x354
       [<0000000000247484>] ksm_memory_callback+0x48/0xd0
       [<00000000005e1d4e>] notifier_call_chain+0x52/0x9c
       [<000000000015b0fa>] __blocking_notifier_call_chain+0x56/0x8c
       [<000000000015b15a>] blocking_notifier_call_chain+0x2a/0x3c
       [<00000000005d116e>] offline_pages.clone.21+0x17a/0x6f0
       [<000000000046363a>] memory_block_change_state+0x172/0x2f4
       [<0000000000463876>] store_mem_state+0xba/0xf0
       [<00000000002e1592>] sysfs_write_file+0xf6/0x1a8
       [<0000000000260d94>] vfs_write+0xb0/0x18c
       [<0000000000261108>] SyS_write+0x58/0xb4
       [<00000000005dfab8>] sysc_noemu+0x22/0x28
       [<000003fffcfa46c0>] 0x3fffcfa46c0

-> #0 ((memory_chain).rwsem){.+.+.+}:
       [<00000000001946ee>] validate_chain.clone.24+0x1106/0x11b4
       [<0000000000195746>] __lock_acquire+0x47a/0xbd4
       [<00000000001964b6>] lock_acquire+0xc2/0x148
       [<00000000005dc30e>] down_read+0x4a/0x88
       [<000000000015b0e4>] __blocking_notifier_call_chain+0x40/0x8c
       [<000000000015b15a>] blocking_notifier_call_chain+0x2a/0x3c
       [<00000000005d16be>] offline_pages.clone.21+0x6ca/0x6f0
       [<000000000046363a>] memory_block_change_state+0x172/0x2f4
       [<0000000000463876>] store_mem_state+0xba/0xf0
       [<00000000002e1592>] sysfs_write_file+0xf6/0x1a8
       [<0000000000260d94>] vfs_write+0xb0/0x18c
       [<0000000000261108>] SyS_write+0x58/0xb4
       [<00000000005dfab8>] sysc_noemu+0x22/0x28
       [<000003fffcfa46c0>] 0x3fffcfa46c0

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(ksm_thread_mutex/1);
                               lock((memory_chain).rwsem);
                               lock(ksm_thread_mutex/1);
  lock((memory_chain).rwsem);

 *** DEADLOCK ***

6 locks held by sh/973:
 #0:  (&buffer->mutex){+.+.+.}, at: [<00000000002e14e6>] sysfs_write_file+0x4a/0x1a8
 #1:  (s_active#53){.+.+.+}, at: [<00000000002e156e>] sysfs_write_file+0xd2/0x1a8
 #2:  (&mem->state_mutex){+.+.+.}, at: [<000000000046350a>] memory_block_change_state+0x42/0x2f4
 #3:  (mem_hotplug_mutex){+.+.+.}, at: [<0000000000252e30>] lock_memory_hotplug+0x2c/0x4c
 #4:  (pm_mutex#2){+.+.+.}, at: [<00000000005d10ea>] offline_pages.clone.21+0xf6/0x6f0
 #5:  (ksm_thread_mutex/1){+.+.+.}, at: [<0000000000247484>] ksm_memory_callback+0x48/0xd0

stack backtrace:
CPU: 1 Not tainted 3.3.0-rc2 #8
Process sh (pid: 973, task: 000000003ecb8000, ksp: 000000003b24b898)
       000000003b24b930 000000003b24b8b0 0000000000000002 0000000000000000
       000000003b24b950 000000003b24b8c8 000000003b24b8c8 00000000005da66a
       0000000000000000 0000000000000000 000000003b24ba08 000000003ecb8000
       000000000000000d 000000000000000c 000000003b24b918 0000000000000000
       0000000000000000 0000000000100af8 000000003b24b8b0 000000003b24b8f0
Call Trace:
([<0000000000100a06>] show_trace+0xee/0x144)
 [<0000000000192564>] print_circular_bug+0x220/0x328
 [<00000000001946ee>] validate_chain.clone.24+0x1106/0x11b4
 [<0000000000195746>] __lock_acquire+0x47a/0xbd4
 [<00000000001964b6>] lock_acquire+0xc2/0x148
 [<00000000005dc30e>] down_read+0x4a/0x88
 [<000000000015b0e4>] __blocking_notifier_call_chain+0x40/0x8c
 [<000000000015b15a>] blocking_notifier_call_chain+0x2a/0x3c
 [<00000000005d16be>] offline_pages.clone.21+0x6ca/0x6f0
 [<000000000046363a>] memory_block_change_state+0x172/0x2f4
 [<0000000000463876>] store_mem_state+0xba/0xf0
 [<00000000002e1592>] sysfs_write_file+0xf6/0x1a8
 [<0000000000260d94>] vfs_write+0xb0/0x18c
 [<0000000000261108>] SyS_write+0x58/0xb4
 [<00000000005dfab8>] sysc_noemu+0x22/0x28
 [<000003fffcfa46c0>] 0x3fffcfa46c0
INFO: lockdep is turned off.


^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: ksm/memory hotplug: lockdep warning for ksm_thread_mutex vs. (memory_chain).rwsem
  2012-02-02 16:13 ksm/memory hotplug: lockdep warning for ksm_thread_mutex vs. (memory_chain).rwsem Gerald Schaefer
@ 2012-02-02 23:00 ` KOSAKI Motohiro
  2012-02-03 15:37   ` Gerald Schaefer
  2012-07-16 12:49   ` Gerald Schaefer
  0 siblings, 2 replies; 4+ messages in thread
From: KOSAKI Motohiro @ 2012-02-02 23:00 UTC (permalink / raw)
  To: gerald.schaefer
  Cc: Hugh Dickins, linux-mm, linux-kernel, Martin Schwidefsky,
	Heiko Carstens, Andrea Arcangeli, Chris Wright, Izik Eidus,
	KAMEZAWA Hiroyuki

2012/2/2 Gerald Schaefer <gerald.schaefer@de.ibm.com>:
> Setting a memory block offline triggers the following lockdep warning. This
> looks exactly like the issue reported by Kosaki Motohiro in
> https://lkml.org/lkml/2010/10/25/110. Seems like the resulting commit a0b0f58cdd
> did not fix the lockdep warning. I'm able to reproduce it with current 3.3.0-rc2
> as well as 2.6.37-rc4-00147-ga0b0f58.
>
> I'm not familiar with lockdep annotations, but I tried using down_read_nested()
> for (memory_chain).rwsem, similar to the mutex_lock_nested() which was
> introduced for ksm_thread_mutex, but that didn't help.

Heh, interesting. Simple question: do you see any user-visible buggy
behavior, or just a false-positive warning?

*_nested() is just a hacky trick, so any change may break its lie.
Anyway, I'd like to dig into this one. Thanks for reporting.


* Re: ksm/memory hotplug: lockdep warning for ksm_thread_mutex vs. (memory_chain).rwsem
  2012-02-02 23:00 ` KOSAKI Motohiro
@ 2012-02-03 15:37   ` Gerald Schaefer
  2012-07-16 12:49   ` Gerald Schaefer
  1 sibling, 0 replies; 4+ messages in thread
From: Gerald Schaefer @ 2012-02-03 15:37 UTC (permalink / raw)
  To: KOSAKI Motohiro
  Cc: Hugh Dickins, linux-mm, linux-kernel, Martin Schwidefsky,
	Heiko Carstens, Andrea Arcangeli, Chris Wright, Izik Eidus,
	KAMEZAWA Hiroyuki

On 03.02.2012 00:00, KOSAKI Motohiro wrote:
> 2012/2/2 Gerald Schaefer<gerald.schaefer@de.ibm.com>:
>> Setting a memory block offline triggers the following lockdep warning. This
>> looks exactly like the issue reported by Kosaki Motohiro in
>> https://lkml.org/lkml/2010/10/25/110. Seems like the resulting commit a0b0f58cdd
>> did not fix the lockdep warning. I'm able to reproduce it with current 3.3.0-rc2
>> as well as 2.6.37-rc4-00147-ga0b0f58.
>>
>> I'm not familiar with lockdep annotations, but I tried using down_read_nested()
>> for (memory_chain).rwsem, similar to the mutex_lock_nested() which was
>> introduced for ksm_thread_mutex, but that didn't help.
> 
> Heh, interesting. Simple question: do you see any user-visible buggy
> behavior, or just a false-positive warning?
> 
> *_nested() is just a hacky trick, so any change may break its lie.
> Anyway, I'd like to dig into this one. Thanks for reporting.

There is no real deadlock and no user-visible buggy behaviour; the memory is
being offlined as requested. I think your conclusion from last time is still
valid: both locks are taken inside mem_hotplug_mutex, so there can't be a
deadlock. The question is how to convince lockdep of this.



* Re: ksm/memory hotplug: lockdep warning for ksm_thread_mutex vs. (memory_chain).rwsem
  2012-02-02 23:00 ` KOSAKI Motohiro
  2012-02-03 15:37   ` Gerald Schaefer
@ 2012-07-16 12:49   ` Gerald Schaefer
  1 sibling, 0 replies; 4+ messages in thread
From: Gerald Schaefer @ 2012-07-16 12:49 UTC (permalink / raw)
  To: KOSAKI Motohiro
  Cc: Hugh Dickins, linux-mm, linux-kernel, Martin Schwidefsky,
	Heiko Carstens, Andrea Arcangeli, Peter Zijlstra,
	KAMEZAWA Hiroyuki

On Thu, 2 Feb 2012 18:00:45 -0500
KOSAKI Motohiro <kosaki.motohiro@gmail.com> wrote:

> 2012/2/2 Gerald Schaefer <gerald.schaefer@de.ibm.com>:
> > Setting a memory block offline triggers the following lockdep
> > warning. This looks exactly like the issue reported by Kosaki
> > Motohiro in https://lkml.org/lkml/2010/10/25/110. Seems like the
> > resulting commit a0b0f58cdd did not fix the lockdep warning. I'm
> > able to reproduce it with current 3.3.0-rc2 as well as
> > 2.6.37-rc4-00147-ga0b0f58.
> >
> > I'm not familiar with lockdep annotations, but I tried using
> > down_read_nested() for (memory_chain).rwsem, similar to the
> > mutex_lock_nested() which was introduced for ksm_thread_mutex, but
> > that didn't help.
> 
> Heh, interesting. Simple question: do you see any user-visible buggy
> behavior, or just a false-positive warning?
> 
> *_nested() is just a hacky trick, so any change may break its lie.
> Anyway, I'd like to dig into this one. Thanks for reporting.

Hi,

any news on this? I'm still getting test reports about the lockdep
warning: the problem is still present in 3.5.0-rc7, and it still looks
like a false positive to me (both locks are taken inside mem_hotplug_mutex,
so there can't be a deadlock; see also the comment in mm/ksm.c). Any ideas
how to convince lockdep of that, so that we can run memory hotplug tests
again with lockdep enabled?

======================================================
[ INFO: possible circular locking dependency detected ]
3.5.0-rc7 #40 Not tainted
-------------------------------------------------------
sh/698 is trying to acquire lock:
 ((memory_chain).rwsem){.+.+.+}, at: [<0000000000165372>] __blocking_notifier_call_chain+0x5e/0xe0

but task is already holding lock:
 (ksm_thread_mutex/1){+.+.+.}, at: [<000000000026a654>] ksm_memory_callback+0x48/0xd8

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (ksm_thread_mutex/1){+.+.+.}:
       [<00000000001a5b42>] __lock_acquire+0x3f6/0xb28
       [<00000000001a69d2>] lock_acquire+0xbe/0x250
       [<000000000064cefc>] mutex_lock_nested+0x98/0x3f4
       [<000000000026a654>] ksm_memory_callback+0x48/0xd8
       [<0000000000653af4>] notifier_call_chain+0x8c/0x174
       [<0000000000165388>] __blocking_notifier_call_chain+0x74/0xe0
       [<000000000016541e>] blocking_notifier_call_chain+0x2a/0x3c
       [<0000000000638cb4>] offline_pages.constprop.1+0x17c/0x740
       [<00000000004b5326>] memory_block_change_state+0x2aa/0x328
       [<00000000004b545e>] store_mem_state+0xba/0xf0
       [<000000000030b5f2>] sysfs_write_file+0xf6/0x1a8
       [<0000000000283a3a>] vfs_write+0x9a/0x184
       [<0000000000283d94>] SyS_write+0x58/0x94
       [<00000000006514f4>] sysc_noemu+0x22/0x28
       [<000003fffd5aa3e8>] 0x3fffd5aa3e8

-> #0 ((memory_chain).rwsem){.+.+.+}:
       [<00000000001a22bc>] validate_chain+0x880/0x1154
       [<00000000001a5b42>] __lock_acquire+0x3f6/0xb28
       [<00000000001a69d2>] lock_acquire+0xbe/0x250
       [<000000000064d8a6>] down_read+0x66/0xdc
       [<0000000000165372>] __blocking_notifier_call_chain+0x5e/0xe0
       [<000000000016541e>] blocking_notifier_call_chain+0x2a/0x3c
       [<0000000000638d00>] offline_pages.constprop.1+0x1c8/0x740
       [<00000000004b5326>] memory_block_change_state+0x2aa/0x328
       [<00000000004b545e>] store_mem_state+0xba/0xf0
       [<000000000030b5f2>] sysfs_write_file+0xf6/0x1a8
       [<0000000000283a3a>] vfs_write+0x9a/0x184
       [<0000000000283d94>] SyS_write+0x58/0x94
       [<00000000006514f4>] sysc_noemu+0x22/0x28
       [<000003fffd5aa3e8>] 0x3fffd5aa3e8

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(ksm_thread_mutex/1);
                               lock((memory_chain).rwsem);
                               lock(ksm_thread_mutex/1);
  lock((memory_chain).rwsem);

 *** DEADLOCK ***

6 locks held by sh/698:
 #0:  (&buffer->mutex){+.+.+.}, at: [<000000000030b546>] sysfs_write_file+0x4a/0x1a8
 #1:  (s_active#31){.+.+.+}, at: [<000000000030b5ce>] sysfs_write_file+0xd2/0x1a8
 #2:  (&mem->state_mutex){+.+.+.}, at: [<00000000004b50be>] memory_block_change_state+0x42/0x328
 #3:  (mem_hotplug_mutex){+.+.+.}, at: [<0000000000275324>] lock_memory_hotplug+0x2c/0x4c
 #4:  (pm_mutex){+.+.+.}, at: [<0000000000638c22>] offline_pages.constprop.1+0xea/0x740
 #5:  (ksm_thread_mutex/1){+.+.+.}, at: [<000000000026a654>] ksm_memory_callback+0x48/0xd8

stack backtrace:
CPU: 1 Not tainted 3.5.0-rc7 #40
Process sh (pid: 698, task: 00000000d6b74850, ksp: 00000000d71c7ac0)
       00000000d71c78d8 00000000d71c78e8 0000000000000002 0000000000000000 
       00000000d71c7978 00000000d71c78f0 00000000d71c78f0 00000000001009e0 
       0000000000000000 0000000000000001 000000000000000b 000000000000000b 
       00000000d71c7938 00000000d71c78d8 0000000000000000 0000000000000000 
       0000000000660768 00000000001009e0 00000000d71c78d8 00000000d71c7928 
Call Trace:
([<00000000001008e6>] show_trace+0xee/0x144)
 [<000000000064436e>] print_circular_bug+0x2ee/0x300
 [<00000000001a22bc>] validate_chain+0x880/0x1154
 [<00000000001a5b42>] __lock_acquire+0x3f6/0xb28
 [<00000000001a69d2>] lock_acquire+0xbe/0x250
 [<000000000064d8a6>] down_read+0x66/0xdc
 [<0000000000165372>] __blocking_notifier_call_chain+0x5e/0xe0
 [<000000000016541e>] blocking_notifier_call_chain+0x2a/0x3c
 [<0000000000638d00>] offline_pages.constprop.1+0x1c8/0x740
 [<00000000004b5326>] memory_block_change_state+0x2aa/0x328
 [<00000000004b545e>] store_mem_state+0xba/0xf0
 [<000000000030b5f2>] sysfs_write_file+0xf6/0x1a8
 [<0000000000283a3a>] vfs_write+0x9a/0x184
 [<0000000000283d94>] SyS_write+0x58/0x94
 [<00000000006514f4>] sysc_noemu+0x22/0x28
 [<000003fffd5aa3e8>] 0x3fffd5aa3e8
INFO: lockdep is turned off.


