* [linux-next / tty] possible circular locking dependency detected
From: Sergey Senozhatsky @ 2017-05-22  7:39 UTC
  To: Greg Kroah-Hartman
  Cc: Vegard Nossum, Jiri Slaby, linux-kernel, Sergey Senozhatsky

Hello,

[ 1274.378287] ======================================================
[ 1274.378289] WARNING: possible circular locking dependency detected
[ 1274.378290] 4.12.0-rc1-next-20170522-dbg-00007-gc09b2ab28b74-dirty #1317 Not tainted
[ 1274.378291] ------------------------------------------------------
[ 1274.378293] kworker/u8:5/111 is trying to acquire lock:
[ 1274.378294]  (&buf->lock){+.+...}, at: [<ffffffff812f2831>] tty_buffer_flush+0x34/0x88
[ 1274.378300] 
               but task is already holding lock:
[ 1274.378301]  (&o_tty->termios_rwsem/1){++++..}, at: [<ffffffff812ee5c7>] isig+0x47/0xd2
[ 1274.378307] 
               which lock already depends on the new lock.

[ 1274.378309] 
               the existing dependency chain (in reverse order) is:
[ 1274.378310] 
               -> #2 (&o_tty->termios_rwsem/1){++++..}:
[ 1274.378316]        lock_acquire+0x183/0x1ae
[ 1274.378319]        down_read+0x3e/0x62
[ 1274.378321]        n_tty_write+0x6c/0x3d6
[ 1274.378322]        tty_write+0x1cc/0x25f
[ 1274.378325]        __vfs_write+0x26/0xec
[ 1274.378327]        vfs_write+0xe1/0x16a
[ 1274.378329]        SyS_write+0x51/0x8e
[ 1274.378330]        entry_SYSCALL_64_fastpath+0x18/0xad
[ 1274.378331] 
               -> #1 (&tty->atomic_write_lock){+.+.+.}:
[ 1274.378335]        lock_acquire+0x183/0x1ae
[ 1274.378337]        __mutex_lock+0x95/0x7ba
[ 1274.378339]        mutex_lock_nested+0x1b/0x1d
[ 1274.378340]        tty_port_default_receive_buf+0x4e/0x81
[ 1274.378342]        flush_to_ldisc+0x87/0xa1
[ 1274.378345]        process_one_work+0x2be/0x52b
[ 1274.378346]        worker_thread+0x1f3/0x2c5
[ 1274.378349]        kthread+0x131/0x139
[ 1274.378350]        ret_from_fork+0x2e/0x40
[ 1274.378351] 
               -> #0 (&buf->lock){+.+...}:
[ 1274.378355]        __lock_acquire+0xec4/0x1444
[ 1274.378357]        lock_acquire+0x183/0x1ae
[ 1274.378358]        __mutex_lock+0x95/0x7ba
[ 1274.378360]        mutex_lock_nested+0x1b/0x1d
[ 1274.378362]        tty_buffer_flush+0x34/0x88
[ 1274.378364]        pty_flush_buffer+0x27/0x70
[ 1274.378366]        tty_driver_flush_buffer+0x1b/0x1e
[ 1274.378367]        isig+0x9b/0xd2
[ 1274.378369]        n_tty_receive_signal_char+0x1c/0x59
[ 1274.378371]        n_tty_receive_char_special+0xa4/0x740
[ 1274.378373]        n_tty_receive_buf_common+0x452/0x810
[ 1274.378374]        n_tty_receive_buf2+0x14/0x16
[ 1274.378376]        tty_ldisc_receive_buf+0x1f/0x4a
[ 1274.378377]        tty_port_default_receive_buf+0x5f/0x81
[ 1274.378379]        flush_to_ldisc+0x87/0xa1
[ 1274.378380]        process_one_work+0x2be/0x52b
[ 1274.378382]        worker_thread+0x1f3/0x2c5
[ 1274.378383]        kthread+0x131/0x139
[ 1274.378385]        ret_from_fork+0x2e/0x40
[ 1274.378386] 
               other info that might help us debug this:

[ 1274.378387] Chain exists of:
                 &buf->lock --> &tty->atomic_write_lock --> &o_tty->termios_rwsem/1

[ 1274.378392]  Possible unsafe locking scenario:

[ 1274.378393]        CPU0                    CPU1
[ 1274.378394]        ----                    ----
[ 1274.378394]   lock(&o_tty->termios_rwsem/1);
[ 1274.378397]                                lock(&tty->atomic_write_lock);
[ 1274.378399]                                lock(&o_tty->termios_rwsem/1);
[ 1274.378402]   lock(&buf->lock);
[ 1274.378403] 
                *** DEADLOCK ***

[ 1274.378405] 6 locks held by kworker/u8:5/111:
[ 1274.378406]  #0:  ("events_unbound"){.+.+.+}, at: [<ffffffff81058c02>] process_one_work+0x163/0x52b
[ 1274.378410]  #1:  ((&buf->work)){+.+...}, at: [<ffffffff81058c02>] process_one_work+0x163/0x52b
[ 1274.378414]  #2:  (&port->buf.lock/1){+.+...}, at: [<ffffffff812f2550>] flush_to_ldisc+0x25/0xa1
[ 1274.378419]  #3:  (&tty->ldisc_sem){++++.+}, at: [<ffffffff812f1e98>] tty_ldisc_ref+0x1f/0x41
[ 1274.378423]  #4:  (&tty->atomic_write_lock){+.+.+.}, at: [<ffffffff812f2c7e>] tty_port_default_receive_buf+0x4e/0x81
[ 1274.378427]  #5:  (&o_tty->termios_rwsem/1){++++..}, at: [<ffffffff812ee5c7>] isig+0x47/0xd2
[ 1274.378431] 
               stack backtrace:
[ 1274.378434] CPU: 1 PID: 111 Comm: kworker/u8:5 Not tainted 4.12.0-rc1-next-20170522-dbg-00007-gc09b2ab28b74-dirty #1317
[ 1274.378437] Workqueue: events_unbound flush_to_ldisc
[ 1274.378439] Call Trace:
[ 1274.378443]  dump_stack+0x70/0x9a
[ 1274.378445]  print_circular_bug+0x272/0x280
[ 1274.378447]  __lock_acquire+0xec4/0x1444
[ 1274.378450]  lock_acquire+0x183/0x1ae
[ 1274.378452]  ? lock_acquire+0x183/0x1ae
[ 1274.378453]  ? tty_buffer_flush+0x34/0x88
[ 1274.378455]  __mutex_lock+0x95/0x7ba
[ 1274.378457]  ? tty_buffer_flush+0x34/0x88
[ 1274.378459]  ? isig+0x64/0xd2
[ 1274.378460]  ? tty_buffer_flush+0x34/0x88
[ 1274.378462]  ? find_held_lock+0x31/0x77
[ 1274.378464]  mutex_lock_nested+0x1b/0x1d
[ 1274.378466]  ? mutex_lock_nested+0x1b/0x1d
[ 1274.378468]  tty_buffer_flush+0x34/0x88
[ 1274.378470]  pty_flush_buffer+0x27/0x70
[ 1274.378472]  tty_driver_flush_buffer+0x1b/0x1e
[ 1274.378473]  isig+0x9b/0xd2
[ 1274.378475]  n_tty_receive_signal_char+0x1c/0x59
[ 1274.378477]  n_tty_receive_char_special+0xa4/0x740
[ 1274.378479]  n_tty_receive_buf_common+0x452/0x810
[ 1274.378481]  ? lock_acquire+0x183/0x1ae
[ 1274.378484]  n_tty_receive_buf2+0x14/0x16
[ 1274.378485]  tty_ldisc_receive_buf+0x1f/0x4a
[ 1274.378487]  tty_port_default_receive_buf+0x5f/0x81
[ 1274.378489]  flush_to_ldisc+0x87/0xa1
[ 1274.378491]  process_one_work+0x2be/0x52b
[ 1274.378493]  worker_thread+0x1f3/0x2c5
[ 1274.378495]  ? rescuer_thread+0x2ca/0x2ca
[ 1274.378497]  kthread+0x131/0x139
[ 1274.378498]  ? kthread_create_on_node+0x3f/0x3f
[ 1274.378500]  ret_from_fork+0x2e/0x40
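
The report boils down to a three-lock cycle: the flush worker takes
&buf->lock before &tty->atomic_write_lock (flush_to_ldisc ->
tty_port_default_receive_buf, chain #1), a writer takes
&tty->atomic_write_lock before &o_tty->termios_rwsem (tty_write ->
n_tty_write, chain #2), and the signal path now takes
&o_tty->termios_rwsem before &buf->lock (isig -> pty_flush_buffer ->
tty_buffer_flush, chain #0). Here is a minimal userspace sketch of that
cycle, with pthread mutexes standing in for the kernel locks; this is an
illustration only, not kernel code, and it deadlocks by design:

/* cc -pthread cycle.c -o cycle && ./cycle ; hangs, as lockdep predicts */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER; /* ~ &buf->lock */
static pthread_mutex_t awrite   = PTHREAD_MUTEX_INITIALIZER; /* ~ &tty->atomic_write_lock */
static pthread_mutex_t termios  = PTHREAD_MUTEX_INITIALIZER; /* ~ &o_tty->termios_rwsem */

/* flush_to_ldisc() path: buf_lock, then awrite (existing chain #1) */
static void *flusher(void *unused)
{
	pthread_mutex_lock(&buf_lock);
	sleep(1);			/* widen the race window */
	pthread_mutex_lock(&awrite);
	pthread_mutex_unlock(&awrite);
	pthread_mutex_unlock(&buf_lock);
	return NULL;
}

/* tty_write() -> n_tty_write() path: awrite, then termios (chain #2) */
static void *writer(void *unused)
{
	pthread_mutex_lock(&awrite);
	sleep(1);
	pthread_mutex_lock(&termios);
	pthread_mutex_unlock(&termios);
	pthread_mutex_unlock(&awrite);
	return NULL;
}

/* isig() -> tty_buffer_flush() path: termios, then buf_lock (new edge #0) */
static void *signaller(void *unused)
{
	pthread_mutex_lock(&termios);
	sleep(1);
	pthread_mutex_lock(&buf_lock);
	pthread_mutex_unlock(&buf_lock);
	pthread_mutex_unlock(&termios);
	return NULL;
}

int main(void)
{
	pthread_t t[3];
	int i;

	pthread_create(&t[0], NULL, flusher, NULL);
	pthread_create(&t[1], NULL, writer, NULL);
	pthread_create(&t[2], NULL, signaller, NULL);
	for (i = 0; i < 3; i++)
		pthread_join(t[i], NULL);
	puts("no deadlock this time");
	return 0;
}

Each thread grabs its first lock, sleeps, then blocks on a lock the next
thread holds: exactly the circular wait lockdep is warning about.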

	-ss

* Re: [linux-next / tty] possible circular locking dependency detected
From: Greg Kroah-Hartman @ 2017-05-22 10:24 UTC
  To: Sergey Senozhatsky
  Cc: Vegard Nossum, Jiri Slaby, linux-kernel, Sergey Senozhatsky

On Mon, May 22, 2017 at 04:39:43PM +0900, Sergey Senozhatsky wrote:
> Hello,
> 
> [ 1274.378287] ======================================================
> [ 1274.378289] WARNING: possible circular locking dependency detected
> [ 1274.378290] 4.12.0-rc1-next-20170522-dbg-00007-gc09b2ab28b74-dirty #1317 Not tainted
> [ 1274.378291] ------------------------------------------------------
> [ 1274.378293] kworker/u8:5/111 is trying to acquire lock:
> [ 1274.378294]  (&buf->lock){+.+...}, at: [<ffffffff812f2831>] tty_buffer_flush+0x34/0x88
> [ 1274.378300]
>                but task is already holding lock:
> [ 1274.378301]  (&o_tty->termios_rwsem/1){++++..}, at: [<ffffffff812ee5c7>] isig+0x47/0xd2
> [ 1274.378307]
>                which lock already depends on the new lock.

...

Any hint as to what you were doing when this happened?

Does this also show up in 4.11?

thanks,

greg k-h

* Re: [linux-next / tty] possible circular locking dependency detected
From: Jiri Slaby @ 2017-05-22 10:26 UTC
  To: Greg Kroah-Hartman, Sergey Senozhatsky
  Cc: Sergey Senozhatsky, Vegard Nossum, linux-kernel

On 05/22/2017, 12:24 PM, Greg Kroah-Hartman wrote:
> On Mon, May 22, 2017 at 04:39:43PM +0900, Sergey Senozhatsky wrote:
>> Hello,
>>
>> [ 1274.378287] ======================================================
>> [ 1274.378289] WARNING: possible circular locking dependency detected
>> [ 1274.378290] 4.12.0-rc1-next-20170522-dbg-00007-gc09b2ab28b74-dirty #1317 Not tainted
>> [ 1274.378291] ------------------------------------------------------
>> [ 1274.378293] kworker/u8:5/111 is trying to acquire lock:
>> [ 1274.378294]  (&buf->lock){+.+...}, at: [<ffffffff812f2831>] tty_buffer_flush+0x34/0x88
>> [ 1274.378300] 
>>                but task is already holding lock:
>> [ 1274.378301]  (&o_tty->termios_rwsem/1){++++..}, at: [<ffffffff812ee5c7>] isig+0x47/0xd2
>> [ 1274.378307] 
>>                which lock already depends on the new lock.
>>
>> [ 1274.378309] 
>>                the existing dependency chain (in reverse order) is:
>> [ 1274.378310] 
>>                -> #2 (&o_tty->termios_rwsem/1){++++..}:
>> [ 1274.378316]        lock_acquire+0x183/0x1ae
>> [ 1274.378319]        down_read+0x3e/0x62
>> [ 1274.378321]        n_tty_write+0x6c/0x3d6
>> [ 1274.378322]        tty_write+0x1cc/0x25f
>> [ 1274.378325]        __vfs_write+0x26/0xec
>> [ 1274.378327]        vfs_write+0xe1/0x16a
>> [ 1274.378329]        SyS_write+0x51/0x8e
>> [ 1274.378330]        entry_SYSCALL_64_fastpath+0x18/0xad
>> [ 1274.378331] 
>>                -> #1 (&tty->atomic_write_lock){+.+.+.}:
>> [ 1274.378335]        lock_acquire+0x183/0x1ae
>> [ 1274.378337]        __mutex_lock+0x95/0x7ba
>> [ 1274.378339]        mutex_lock_nested+0x1b/0x1d
>> [ 1274.378340]        tty_port_default_receive_buf+0x4e/0x81

...
> Does this also show up in 4.11?

According to the traces, I believe this is caused by
commit 925bb1ce47f429f69aad35876df7ecd8c53deb7e
Author: Vegard Nossum <vegard.nossum@oracle.com>
Date:   Thu May 11 12:18:52 2017 +0200

    tty: fix port buffer locking

thanks,
-- 
js
suse labs

* Re: [linux-next / tty] possible circular locking dependency detected
From: Vegard Nossum @ 2017-05-22 10:27 UTC
  To: Greg Kroah-Hartman, Sergey Senozhatsky
  Cc: Jiri Slaby, linux-kernel, Sergey Senozhatsky

On 05/22/17 12:24, Greg Kroah-Hartman wrote:
> On Mon, May 22, 2017 at 04:39:43PM +0900, Sergey Senozhatsky wrote:
>> Hello,
>>
>> [ 1274.378287] ======================================================
>> [ 1274.378289] WARNING: possible circular locking dependency detected
>> [ 1274.378290] 4.12.0-rc1-next-20170522-dbg-00007-gc09b2ab28b74-dirty #1317 Not tainted
>> [ 1274.378291] ------------------------------------------------------
>> [ 1274.378293] kworker/u8:5/111 is trying to acquire lock:
>> [ 1274.378294]  (&buf->lock){+.+...}, at: [<ffffffff812f2831>] tty_buffer_flush+0x34/0x88
>> [ 1274.378300]
>>                 but task is already holding lock:
>> [ 1274.378301]  (&o_tty->termios_rwsem/1){++++..}, at: [<ffffffff812ee5c7>] isig+0x47/0xd2
>> [ 1274.378307]
>>                 which lock already depends on the new lock.
>>

> Any hint as to what you were doing when this happened?
> 
> Does this also show up in 4.11?

It's my patch "tty: fix port buffer locking" :-/

At a glance, it looks related to the pty code taking the lock on the
other side of the pair in a different order. I'll have a closer look.
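
Concretely, a rough model of the cross-link, reconstructed from the
traces (the names are stand-ins, not the kernel's): each end of the pty
pair owns a buffer lock, but flushing one end's output means discarding
data queued in the *other* end's buffer, so pty_flush_buffer() reaches
across the link. Note the held-locks list in the report: the worker
already holds its own &port->buf.lock/1, and the isig() path then wants
the peer's &buf->lock while &o_tty->termios_rwsem is held.

#include <pthread.h>

struct pty_side {
	pthread_mutex_t buf_lock;	/* ~ one end's port buffer lock */
	struct pty_side *link;		/* the other end of the pair */
};

/* ~ pty_flush_buffer(): data written to this end is queued at the peer,
 * so flushing takes the peer's buffer lock, not our own. */
static void flush_peer_buffer(struct pty_side *s)
{
	pthread_mutex_lock(&s->link->buf_lock);
	/* ... drop the queued data ... */
	pthread_mutex_unlock(&s->link->buf_lock);
}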


Vegard

* Re: [linux-next / tty] possible circular locking dependency detected
From: Sergey Senozhatsky @ 2017-05-22 10:39 UTC
  To: Greg Kroah-Hartman
  Cc: Sergey Senozhatsky, Vegard Nossum, Jiri Slaby, linux-kernel,
	Sergey Senozhatsky

On (05/22/17 12:24), Greg Kroah-Hartman wrote:
[..]
> Any hint as to what you were doing when this happened?

nothing special at all. just logged in, basically.

> Does this also show up in 4.11?

seen only once so far.

I somewhat suspect that this might be related to 925bb1ce47f429;
Vegard is in Cc.

	-ss

* Re: [linux-next / tty] possible circular locking dependency detected
From: Vegard Nossum @ 2017-05-29 10:43 UTC
  To: Greg Kroah-Hartman, Sergey Senozhatsky
  Cc: Jiri Slaby, linux-kernel, Sergey Senozhatsky

On 05/22/17 12:27, Vegard Nossum wrote:
> On 05/22/17 12:24, Greg Kroah-Hartman wrote:
>> On Mon, May 22, 2017 at 04:39:43PM +0900, Sergey Senozhatsky wrote:
>>> Hello,
>>>
>>> [ 1274.378287] ======================================================
>>> [ 1274.378289] WARNING: possible circular locking dependency detected
>>> [ 1274.378290] 4.12.0-rc1-next-20170522-dbg-00007-gc09b2ab28b74-dirty 
>>> #1317 Not tainted
>>> [ 1274.378291] ------------------------------------------------------
>>> [ 1274.378293] kworker/u8:5/111 is trying to acquire lock:
>>> [ 1274.378294]  (&buf->lock){+.+...}, at: [<ffffffff812f2831>] 
>>> tty_buffer_flush+0x34/0x88
>>> [ 1274.378300]
>>>                 but task is already holding lock:
>>> [ 1274.378301]  (&o_tty->termios_rwsem/1){++++..}, at: 
>>> [<ffffffff812ee5c7>] isig+0x47/0xd2
>>> [ 1274.378307]
>>>                 which lock already depends on the new lock.
>>>
> 
>> Any hint as to what you were doing when this happened?
>>
>> Does this also show up in 4.11?
> 
> It's my patch "tty: fix port buffer locking" :-/
> 
> At a glance, looks related to pty taking the lock on the other side in a
> different order. I'll have a closer look.

I can reproduce the lockdep report locally on v4.12-rc3. Looking at it now.


Vegard

* Re: [linux-next / tty] possible circular locking dependency detected
From: Geert Uytterhoeven @ 2017-05-30 10:47 UTC
  To: Sergey Senozhatsky
  Cc: Greg Kroah-Hartman, Vegard Nossum, Jiri Slaby, linux-kernel,
	Sergey Senozhatsky, Linux-Renesas

On Mon, May 22, 2017 at 9:39 AM, Sergey Senozhatsky
<sergey.senozhatsky.work@gmail.com> wrote:
>
> [ 1274.378287] ======================================================
> [ 1274.378289] WARNING: possible circular locking dependency detected
> [ 1274.378290] 4.12.0-rc1-next-20170522-dbg-00007-gc09b2ab28b74-dirty #1317 Not tainted
> [ 1274.378291] ------------------------------------------------------
> [ 1274.378293] kworker/u8:5/111 is trying to acquire lock:
> [ 1274.378294]  (&buf->lock){+.+...}, at: [<ffffffff812f2831>] tty_buffer_flush+0x34/0x88
> [ 1274.378300]
>                but task is already holding lock:
> [ 1274.378301]  (&o_tty->termios_rwsem/1){++++..}, at: [<ffffffff812ee5c7>] isig+0x47/0xd2
> [ 1274.378307]
>                which lock already depends on the new lock.

...

JFTR, I'm also seeing this on various arm/arm64 platforms.

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

* Re: [linux-next / tty] possible circular locking dependency detected
From: Greg Kroah-Hartman @ 2017-06-03  9:34 UTC
  To: Vegard Nossum
  Cc: Sergey Senozhatsky, Jiri Slaby, linux-kernel, Sergey Senozhatsky

On Mon, May 29, 2017 at 12:43:39PM +0200, Vegard Nossum wrote:
> On 05/22/17 12:27, Vegard Nossum wrote:
> > On 05/22/17 12:24, Greg Kroah-Hartman wrote:
> > > On Mon, May 22, 2017 at 04:39:43PM +0900, Sergey Senozhatsky wrote:
> > > > Hello,
> > > > 
> > > > [ 1274.378287] ======================================================
> > > > [ 1274.378289] WARNING: possible circular locking dependency detected
> > > > [ 1274.378290]
> > > > 4.12.0-rc1-next-20170522-dbg-00007-gc09b2ab28b74-dirty #1317 Not
> > > > tainted
> > > > [ 1274.378291] ------------------------------------------------------
> > > > [ 1274.378293] kworker/u8:5/111 is trying to acquire lock:
> > > > [ 1274.378294]  (&buf->lock){+.+...}, at: [<ffffffff812f2831>]
> > > > tty_buffer_flush+0x34/0x88
> > > > [ 1274.378300]
> > > >                 but task is already holding lock:
> > > > [ 1274.378301]  (&o_tty->termios_rwsem/1){++++..}, at:
> > > > [<ffffffff812ee5c7>] isig+0x47/0xd2
> > > > [ 1274.378307]
> > > >                 which lock already depends on the new lock.
> > > > 
> > 
> > > Any hint as to what you were doing when this happened?
> > > 
> > > Does this also show up in 4.11?
> > 
> > It's my patch "tty: fix port buffer locking" :-/
> > 
> > At a glance, looks related to pty taking the lock on the other side in a
> > different order. I'll have a closer look.
> 
> I can reproduce the lockdep report locally on v4.12-rc3. Looking at it now.

Any ideas?  Or should I just revert the original patch?

thanks,

greg k-h

* Re: [linux-next / tty] possible circular locking dependency detected
From: Vegard Nossum @ 2017-06-03 20:50 UTC
  To: Greg Kroah-Hartman
  Cc: Sergey Senozhatsky, Jiri Slaby, linux-kernel, Sergey Senozhatsky

On 06/03/17 11:34, Greg Kroah-Hartman wrote:
> On Mon, May 29, 2017 at 12:43:39PM +0200, Vegard Nossum wrote:
>> On 05/22/17 12:27, Vegard Nossum wrote:
>>> On 05/22/17 12:24, Greg Kroah-Hartman wrote:
>>>> On Mon, May 22, 2017 at 04:39:43PM +0900, Sergey Senozhatsky wrote:
>>>>> Hello,
>>>>>
>>>>> [ 1274.378287] ======================================================
>>>>> [ 1274.378289] WARNING: possible circular locking dependency detected
>>>>> [ 1274.378290]
>>>>> 4.12.0-rc1-next-20170522-dbg-00007-gc09b2ab28b74-dirty #1317 Not
>>>>> tainted
>>>>> [ 1274.378291] ------------------------------------------------------
>>>>> [ 1274.378293] kworker/u8:5/111 is trying to acquire lock:
>>>>> [ 1274.378294]  (&buf->lock){+.+...}, at: [<ffffffff812f2831>]
>>>>> tty_buffer_flush+0x34/0x88
>>>>> [ 1274.378300]
>>>>>                  but task is already holding lock:
>>>>> [ 1274.378301]  (&o_tty->termios_rwsem/1){++++..}, at:
>>>>> [<ffffffff812ee5c7>] isig+0x47/0xd2
>>>>> [ 1274.378307]
>>>>>                  which lock already depends on the new lock.
>>>>>
>>>
>>>> Any hint as to what you were doing when this happened?
>>>>
>>>> Does this also show up in 4.11?
>>>
>>> It's my patch "tty: fix port buffer locking" :-/
>>>
>>> At a glance, looks related to pty taking the lock on the other side in a
>>> different order. I'll have a closer look.
>>
>> I can reproduce the lockdep report locally on v4.12-rc3. Looking at it now.
> 
> Any ideas?  Or should I just revert the original patch?

I think we must revert it for now, as I can easily reproduce not just
the lockdep warning but actual hangs. It seems I missed some code paths
when I worked on the original patch.

I'm working on a fix.
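
(For what it's worth: in the userspace model earlier in the thread, the
cycle disappears if the signal path does not hold the termios lock
across the peer-buffer flush. That is the generic lock-ordering remedy,
not necessarily the shape the real fix will take:)

/* Sketch only, NOT the kernel fix: drop termios before touching the
 * peer buffer, so the termios -> buf_lock edge never exists. */
static void *signaller_fixed(void *unused)
{
	pthread_mutex_lock(&termios);
	/* ... process the signal character ... */
	pthread_mutex_unlock(&termios);

	pthread_mutex_lock(&buf_lock);	/* flush with no other lock held */
	pthread_mutex_unlock(&buf_lock);
	return NULL;
}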


Vegard
