* tty lockdep trace
From: Dave Jones @ 2017-05-31 17:21 UTC
  To: Linux Kernel; +Cc: Greg Kroah-Hartman, Jiri Slaby

Just hit this during a trinity run.  Sidenote: why is KERN_CONT buggered?
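
On the KERN_CONT question: lockdep builds each lock-name line out of several
printk continuations, and when the console layer fails to fuse KERN_CONT
fragments, every fragment lands on its own line; that is what chopped up the
lock names in this splat as originally captured.  A minimal sketch of the
continuation pattern, as a hypothetical out-of-tree module rather than the
actual lockdep code:

/* kern_cont_demo.c: one logical console line built from several
 * printk() calls.  Without KERN_CONT fusing, each call shows up
 * as a separate line.  Build against a kernel tree with the usual
 * obj-m makefile.
 */
#include <linux/module.h>
#include <linux/printk.h>

static int __init demo_init(void)
{
	/* No trailing newline: the line is continued below. */
	printk(KERN_INFO "demo is trying to acquire lock:");
	printk(KERN_CONT " (&buf->lock){+.+...}\n");
	return 0;
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");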

======================================================
WARNING: possible circular locking dependency detected
4.12.0-rc3-think+ #1 Not tainted
------------------------------------------------------
kworker/u8:1/15007 is trying to acquire lock:
 (&buf->lock){+.+...}, at: [<ffffffff815cad67>] tty_buffer_flush+0x37/0xa0

but task is already holding lock:
 (&o_tty->termios_rwsem/1){++++..}, at: [<ffffffff815c5441>] isig+0x41/0xf0

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&o_tty->termios_rwsem/1){++++..}:
       lock_acquire+0xe7/0x1e0
       down_read+0x48/0xb0
       n_tty_write+0x9f/0x490
       tty_write+0x1b8/0x320
       __vfs_write+0x37/0x140
       vfs_write+0xca/0x1c0
       SyS_write+0x58/0xc0
       do_syscall_64+0x66/0x190
       return_from_SYSCALL_64+0x0/0x7a

-> #1 (&tty->atomic_write_lock){+.+.+.}:
       lock_acquire+0xe7/0x1e0
       __mutex_lock+0x97/0x9d0
       mutex_lock_nested+0x1b/0x20
       tty_port_default_receive_buf+0x48/0x90
       flush_to_ldisc+0xa1/0xb0
       process_one_work+0x24d/0x680
       worker_thread+0x4e/0x3b0
       kthread+0x117/0x150
       ret_from_fork+0x27/0x40

-> #0 (&buf->lock){+.+...}:
       __lock_acquire+0x11a2/0x1360
       lock_acquire+0xe7/0x1e0
       __mutex_lock+0x97/0x9d0
       mutex_lock_nested+0x1b/0x20
       tty_buffer_flush+0x37/0xa0
       pty_flush_buffer+0x27/0x80
       tty_driver_flush_buffer+0x1b/0x20
       isig+0x95/0xf0
       n_tty_receive_signal_char+0x1c/0x60
       n_tty_receive_char_special+0x834/0xac0
       n_tty_receive_buf_common+0xa29/0xc80
       n_tty_receive_buf2+0x14/0x20
       tty_ldisc_receive_buf+0x22/0x50
       tty_port_default_receive_buf+0x59/0x90
       flush_to_ldisc+0xa1/0xb0
       process_one_work+0x24d/0x680
       worker_thread+0x4e/0x3b0
       kthread+0x117/0x150
       ret_from_fork+0x27/0x40

other info that might help us debug this:

Chain exists of:
  &buf->lock --> &tty->atomic_write_lock --> &o_tty->termios_rwsem/1

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&o_tty->termios_rwsem/1);
                               lock(&tty->atomic_write_lock);
                               lock(&o_tty->termios_rwsem/1);
  lock(&buf->lock);

 *** DEADLOCK ***

6 locks held by kworker/u8:1/15007:
 #0:  ("events_unbound"){.+.+.+}, at: [<ffffffff810a6522>] process_one_work+0x1c2/0x680
 #1:  ((&buf->work)){+.+...}, at: [<ffffffff810a6522>] process_one_work+0x1c2/0x680
 #2:  (&port->buf.lock/1){+.+...}, at: [<ffffffff815ca825>] flush_to_ldisc+0x25/0xb0
 #3:  (&tty->ldisc_sem){++++.+}, at: [<ffffffff815c9b9f>] tty_ldisc_ref+0x1f/0x60
 #4:  (&tty->atomic_write_lock){+.+.+.}, at: [<ffffffff815cb198>] tty_port_default_receive_buf+0x48/0x90
 #5:  (&o_tty->termios_rwsem/1){++++..}, at: [<ffffffff815c5441>] isig+0x41/0xf0

stack backtrace:
CPU: 0 PID: 15007 Comm: kworker/u8:1 Not tainted 4.12.0-rc3-think+ #1 
Workqueue: events_unbound flush_to_ldisc
Call Trace:
 dump_stack+0x68/0x9f
 print_circular_bug+0x1be/0x210
 __lock_acquire+0x11a2/0x1360
 lock_acquire+0xe7/0x1e0
 ? lock_acquire+0xe7/0x1e0
 ? tty_buffer_flush+0x37/0xa0
 ? tty_buffer_flush+0x37/0xa0
 __mutex_lock+0x97/0x9d0
 ? tty_buffer_flush+0x37/0xa0
 ? debug_lockdep_rcu_enabled+0x1d/0x20
 ? tty_buffer_flush+0x37/0xa0
 ? isig+0x8d/0xf0
 mutex_lock_nested+0x1b/0x20
 ? mutex_lock_nested+0x1b/0x20
 tty_buffer_flush+0x37/0xa0
 pty_flush_buffer+0x27/0x80
 tty_driver_flush_buffer+0x1b/0x20
 isig+0x95/0xf0
 n_tty_receive_signal_char+0x1c/0x60
 n_tty_receive_char_special+0x834/0xac0
 n_tty_receive_buf_common+0xa29/0xc80
 n_tty_receive_buf2+0x14/0x20
 tty_ldisc_receive_buf+0x22/0x50
 ? mutex_lock_nested+0x1b/0x20
 tty_port_default_receive_buf+0x59/0x90
 flush_to_ldisc+0xa1/0xb0
 process_one_work+0x24d/0x680
 worker_thread+0x4e/0x3b0
 kthread+0x117/0x150
 ? process_one_work+0x680/0x680
 ? kthread_create_on_node+0x70/0x70
 ret_from_fork+0x27/0x40
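
Decoded, this is an ordinary three-lock ordering cycle: tty_write() takes
tty->atomic_write_lock and then the termios rwsem (the -> #2 edge); the
flush_to_ldisc() worker takes buf->lock and then tty->atomic_write_lock
(the -> #1 edge); and the isig() path above takes the termios rwsem and then
wants buf->lock (the -> #0 edge), closing the loop.  A minimal userspace
sketch of the same shape, using pthread mutexes in place of the kernel's
mutexes and rw_semaphore (illustrative only; none of the tty logic is
modeled and the lock names are just borrowed):

/* cycle.c: three threads, each taking one edge of the cycle above.
 * Build: cc cycle.c -o cycle -lpthread
 * The sleep(1) widens the race window so every thread grabs its
 * first lock before trying its second; all three then block on one
 * another and the program never exits.
 */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t termios_rwsem     = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t buf_lock          = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t atomic_write_lock = PTHREAD_MUTEX_INITIALIZER;

static void *signal_path(void *arg)   /* isig() -> tty_buffer_flush() */
{
	pthread_mutex_lock(&termios_rwsem);
	sleep(1);
	pthread_mutex_lock(&buf_lock);              /* the -> #0 edge */
	pthread_mutex_unlock(&buf_lock);
	pthread_mutex_unlock(&termios_rwsem);
	return NULL;
}

static void *flush_path(void *arg)    /* flush_to_ldisc() -> receive_buf() */
{
	pthread_mutex_lock(&buf_lock);
	sleep(1);
	pthread_mutex_lock(&atomic_write_lock);     /* the -> #1 edge */
	pthread_mutex_unlock(&atomic_write_lock);
	pthread_mutex_unlock(&buf_lock);
	return NULL;
}

static void *write_path(void *arg)    /* tty_write() -> n_tty_write() */
{
	pthread_mutex_lock(&atomic_write_lock);
	sleep(1);
	pthread_mutex_lock(&termios_rwsem);         /* the -> #2 edge */
	pthread_mutex_unlock(&termios_rwsem);
	pthread_mutex_unlock(&atomic_write_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b, c;

	pthread_create(&a, NULL, signal_path, NULL);
	pthread_create(&b, NULL, flush_path, NULL);
	pthread_create(&c, NULL, write_path, NULL);
	pthread_join(a, NULL);	/* never returns once the cycle closes */
	pthread_join(b, NULL);
	pthread_join(c, NULL);
	return 0;
}

The /1 suffixes in the report (&o_tty->termios_rwsem/1, &port->buf.lock/1)
are lockdep subclass annotations, i.e. those locks were taken through the
*_nested() APIs such as mutex_lock_nested(lock, 1); the pty code uses
subclasses to tell the two ends of a pty pair apart.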

Thread overview: 13+ messages
2017-05-31 17:21 tty lockdep trace Dave Jones
2017-06-03  6:33 ` [bisected] " Mike Galbraith
2017-06-04  8:32   ` Greg Kroah-Hartman
2017-06-04  9:02     ` Mike Galbraith
2017-06-04 10:00       ` Vegard Nossum
2017-06-04 11:34         ` Mike Galbraith
2017-06-05 15:09           ` Alan Cox
2017-06-05 15:41             ` Mike Galbraith
2017-06-05  4:52         ` [patch] tty: fix port buffer locking V2 Mike Galbraith
2017-06-05  6:48           ` Greg Kroah-Hartman
2017-06-05  7:16             ` Mike Galbraith
2017-06-07  2:07           ` [lkp-robot] [tty] 52b03e644f: calltrace:flush_to_ldisc kernel test robot
2017-06-07  2:25             ` Mike Galbraith
