* [PATCH] n_tty: release atomic_read_lock before calling schedule_timeout()
@ 2013-07-30 15:35 Artem Savkov
  2013-07-30 16:39 ` Peter Hurley
From: Artem Savkov @ 2013-07-30 15:35 UTC (permalink / raw)
  To: peter; +Cc: gregkh, jslaby, linux-kernel, Artem Savkov

ldata->atomic_read_lock should be released before scheduling, just as
tty->termios_rwsem already is; otherwise lockdep detects a potential deadlock.

Introduced in "n_tty: Access termios values safely"
(9356b535fcb71db494fc434acceb79f56d15bda2 in linux-next.git)
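
For context, the read path currently has roughly this shape (simplified
sketch based on the diff below; error handling and most of the read loop
are omitted):

        down_read(&tty->termios_rwsem);

        /* internal serialization of reads */
        mutex_lock_interruptible(&ldata->atomic_read_lock);
                        /* lockdep records: termios_rwsem -> atomic_read_lock */
        while (nr) {
                /* nothing to read yet: sleep, dropping only the rwsem */
                n_tty_set_room(tty);
                up_read(&tty->termios_rwsem);

                timeout = schedule_timeout(timeout);    /* atomic_read_lock still held */

                down_read(&tty->termios_rwsem);
                        /* lockdep records: atomic_read_lock -> termios_rwsem */
                continue;
        }

Lockdep sees both acquisition orders and reports the cycle below.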

[   16.822058] ======================================================
[   16.822058] [ INFO: possible circular locking dependency detected ]
[   16.822058] 3.11.0-rc3-next-20130730+ #140 Tainted: G        W
[   16.822058] -------------------------------------------------------
[   16.822058] bash/1198 is trying to acquire lock:
[   16.822058]  (&tty->termios_rwsem){++++..}, at: [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
[   16.822058]
[   16.822058] but task is already holding lock:
[   16.822058]  (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660
[   16.822058]
[   16.822058] which lock already depends on the new lock.
[   16.822058]
[   16.822058]
[   16.822058] the existing dependency chain (in reverse order) is:
[   16.822058]
-> #1 (&ldata->atomic_read_lock){+.+...}:
[   16.822058]        [<ffffffff811111cc>] validate_chain+0x73c/0x850
[   16.822058]        [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
[   16.822058]        [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
[   16.822058]        [<ffffffff81d34b9c>] mutex_lock_interruptible_nested+0x7c/0x540
[   16.822058]        [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660
[   16.822058]        [<ffffffff816a3bb6>] tty_read+0x86/0xf0
[   16.822058]        [<ffffffff811f21d3>] vfs_read+0xc3/0x130
[   16.822058]        [<ffffffff811f2702>] SyS_read+0x62/0xa0
[   16.822058]        [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b
[   16.822058]
-> #0 (&tty->termios_rwsem){++++..}:
[   16.822058]        [<ffffffff8111064f>] check_prev_add+0x14f/0x590
[   16.822058]        [<ffffffff811111cc>] validate_chain+0x73c/0x850
[   16.822058]        [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
[   16.822058]        [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
[   16.822058]        [<ffffffff81d372c1>] down_read+0x51/0xa0
[   16.822058]        [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
[   16.822058]        [<ffffffff816a3bb6>] tty_read+0x86/0xf0
[   16.822058]        [<ffffffff811f21d3>] vfs_read+0xc3/0x130
[   16.822058]        [<ffffffff811f2702>] SyS_read+0x62/0xa0
[   16.822058]        [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b
[   16.822058]
[   16.822058] other info that might help us debug this:
[   16.822058]
[   16.822058]  Possible unsafe locking scenario:
[   16.822058]
[   16.822058]        CPU0                    CPU1
[   16.822058]        ----                    ----
[   16.822058]   lock(&ldata->atomic_read_lock);
[   16.822058]                                lock(&tty->termios_rwsem);
[   16.822058]                                lock(&ldata->atomic_read_lock);
[   16.822058]   lock(&tty->termios_rwsem);
[   16.822058]
[   16.822058]  *** DEADLOCK ***
[   16.822058]
[   16.822058] 2 locks held by bash/1198:
[   16.822058]  #0:  (&tty->ldisc_sem){.+.+.+}, at: [<ffffffff816ade04>] tty_ldisc_ref_wait+0x24/0x60
[   16.822058]  #1:  (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660
[   16.822058]
[   16.822058] stack backtrace:
[   16.822058] CPU: 1 PID: 1198 Comm: bash Tainted: G        W    3.11.0-rc3-next-20130730+ #140
[   16.822058] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007
[   16.822058]  0000000000000000 ffff880019acdb28 ffffffff81d34074 0000000000000002
[   16.822058]  0000000000000000 ffff880019acdb78 ffffffff8110ed75 ffff880019acdb98
[   16.822058]  ffff880019fd0000 ffff880019acdb78 ffff880019fd0638 ffff880019fd0670
[   16.822058] Call Trace:
[   16.822058]  [<ffffffff81d34074>] dump_stack+0x59/0x7d
[   16.822058]  [<ffffffff8110ed75>] print_circular_bug+0x105/0x120
[   16.822058]  [<ffffffff8111064f>] check_prev_add+0x14f/0x590
[   16.822058]  [<ffffffff81d3ab5f>] ? _raw_spin_unlock_irq+0x4f/0x70
[   16.822058]  [<ffffffff811111cc>] validate_chain+0x73c/0x850
[   16.822058]  [<ffffffff8110ae0f>] ? trace_hardirqs_off_caller+0x1f/0x190
[   16.822058]  [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
[   16.822058]  [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
[   16.822058]  [<ffffffff816aa3bb>] ? n_tty_read+0x49b/0x660
[   16.822058]  [<ffffffff81d372c1>] down_read+0x51/0xa0
[   16.822058]  [<ffffffff816aa3bb>] ? n_tty_read+0x49b/0x660
[   16.822058]  [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
[   16.822058]  [<ffffffff810e4130>] ? try_to_wake_up+0x210/0x210
[   16.822058]  [<ffffffff816a3bb6>] tty_read+0x86/0xf0
[   16.822058]  [<ffffffff811f21d3>] vfs_read+0xc3/0x130
[   16.822058]  [<ffffffff811f2702>] SyS_read+0x62/0xa0
[   16.822058]  [<ffffffff815e24ee>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[   16.822058]  [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b

Signed-off-by: Artem Savkov <artem.savkov@gmail.com>
---
 drivers/tty/n_tty.c |   12 ++++++++++++
 1 files changed, 12 insertions(+), 0 deletions(-)

diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
index dd8ae0c..38c09db 100644
--- a/drivers/tty/n_tty.c
+++ b/drivers/tty/n_tty.c
@@ -2203,11 +2203,23 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
 				break;
 			}
 			n_tty_set_room(tty);
+			mutex_unlock(&ldata->atomic_read_lock);
 			up_read(&tty->termios_rwsem);
 
 			timeout = schedule_timeout(timeout);
 
 			down_read(&tty->termios_rwsem);
+			if (file->f_flags & O_NONBLOCK) {
+				if (!mutex_trylock(&ldata->atomic_read_lock)) {
+					retval = -EAGAIN;
+					break;
+				}
+			} else {
+				if (mutex_lock_interruptible(&ldata->atomic_read_lock)) {
+					retval = -ERESTARTSYS;
+					break;
+				}
+			}
 			continue;
 		}
 		__set_current_state(TASK_RUNNING);
-- 
1.7.0.4



* Re: [PATCH] n_tty: release atomic_read_lock before calling schedule_timeout()
  2013-07-30 15:35 [PATCH] n_tty: release atomic_read_lock before calling schedule_timeout() Artem Savkov
@ 2013-07-30 16:39 ` Peter Hurley
  2013-07-31 11:47   ` Artem Savkov
From: Peter Hurley @ 2013-07-30 16:39 UTC (permalink / raw)
  To: Artem Savkov, Michel Lespinasse; +Cc: gregkh, jslaby, linux-kernel, Ingo Molnar

On 07/30/2013 11:35 AM, Artem Savkov wrote:
> ldata->atomic_read_lock should be released before scheduling as well as
> tty->termios_rwsem, otherwise there is a potential deadlock detected by lockdep

False positive.

> Introduced in "n_tty: Access termios values safely"
> (9356b535fcb71db494fc434acceb79f56d15bda2 in linux-next.git)
>
> [   16.822058] ======================================================
> [   16.822058] [ INFO: possible circular locking dependency detected ]
> [   16.822058] 3.11.0-rc3-next-20130730+ #140 Tainted: G        W
> [   16.822058] -------------------------------------------------------
> [   16.822058] bash/1198 is trying to acquire lock:
> [   16.822058]  (&tty->termios_rwsem){++++..}, at: [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
> [   16.822058]
> [   16.822058] but task is already holding lock:
> [   16.822058]  (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660
> [   16.822058]
> [   16.822058] which lock already depends on the new lock.
> [   16.822058]
> [   16.822058]
> [   16.822058] the existing dependency chain (in reverse order) is:
> [   16.822058]
> -> #1 (&ldata->atomic_read_lock){+.+...}:
> [   16.822058]        [<ffffffff811111cc>] validate_chain+0x73c/0x850
> [   16.822058]        [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
> [   16.822058]        [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
> [   16.822058]        [<ffffffff81d34b9c>] mutex_lock_interruptible_nested+0x7c/0x540
> [   16.822058]        [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660
> [   16.822058]        [<ffffffff816a3bb6>] tty_read+0x86/0xf0
> [   16.822058]        [<ffffffff811f21d3>] vfs_read+0xc3/0x130
> [   16.822058]        [<ffffffff811f2702>] SyS_read+0x62/0xa0
> [   16.822058]        [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b
> [   16.822058]
> -> #0 (&tty->termios_rwsem){++++..}:
> [   16.822058]        [<ffffffff8111064f>] check_prev_add+0x14f/0x590
> [   16.822058]        [<ffffffff811111cc>] validate_chain+0x73c/0x850
> [   16.822058]        [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
> [   16.822058]        [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
> [   16.822058]        [<ffffffff81d372c1>] down_read+0x51/0xa0
> [   16.822058]        [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
> [   16.822058]        [<ffffffff816a3bb6>] tty_read+0x86/0xf0
> [   16.822058]        [<ffffffff811f21d3>] vfs_read+0xc3/0x130
> [   16.822058]        [<ffffffff811f2702>] SyS_read+0x62/0xa0
> [   16.822058]        [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b
> [   16.822058]
> [   16.822058] other info that might help us debug this:
> [   16.822058]
> [   16.822058]  Possible unsafe locking scenario:
> [   16.822058]
> [   16.822058]        CPU0                    CPU1
> [   16.822058]        ----                    ----
> [   16.822058]   lock(&ldata->atomic_read_lock);
> [   16.822058]                                lock(&tty->termios_rwsem);
> [   16.822058]                                lock(&ldata->atomic_read_lock);
> [   16.822058]   lock(&tty->termios_rwsem);
> [   16.822058]
> [   16.822058]  *** DEADLOCK ***

This situation is not possible since termios_rwsem is a read/write semaphore;
CPU1 cannot prevent CPU0 from obtaining a read lock on termios_rwsem.
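
As a minimal stand-in sketch of the two columns in that scenario (plain
rwsem/mutex primitives, not the actual tty code), both tasks make progress:

#include <linux/mutex.h>
#include <linux/rwsem.h>

static DECLARE_RWSEM(termios_rwsem);    /* stands in for tty->termios_rwsem      */
static DEFINE_MUTEX(atomic_read_lock);  /* stands in for ldata->atomic_read_lock */

static void cpu0(void)
{
        mutex_lock(&atomic_read_lock);
        /* succeeds even while cpu1() holds its read lock */
        down_read(&termios_rwsem);
        up_read(&termios_rwsem);
        mutex_unlock(&atomic_read_lock);
}

static void cpu1(void)
{
        /* a read lock does not exclude cpu0()'s down_read() */
        down_read(&termios_rwsem);
        /* may wait here for cpu0(), but cpu0() can always finish */
        mutex_lock(&atomic_read_lock);
        mutex_unlock(&atomic_read_lock);
        up_read(&termios_rwsem);
}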

This looks like a regression caused by:

commit a51805efae5dda0da66f79268ffcf0715f9dbea4
Author: Michel Lespinasse <walken@google.com>
Date:   Mon Jul 8 14:23:49 2013 -0700

     lockdep: Introduce lock_acquire_exclusive()/shared() helper macros

     In lockdep.h, the spinlock/mutex/rwsem/rwlock/lock_map acquire macros have
     different definitions based on the value of CONFIG_PROVE_LOCKING.  We have
     separate ifdefs for each of these definitions, which seems redundant.

     Introduce lock_acquire_{exclusive,shared,shared_recursive} helpers which
     will have different definitions based on CONFIG_PROVE_LOCKING.  Then all
     other helper macros can be defined based on the above ones, which reduces
     the amount of ifdefined code.

     Signed-off-by: Michel Lespinasse <walken@google.com>
     Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
     Signed-off-by: Peter Zijlstra <peterz@infradead.org>
     Cc: Oleg Nesterov <oleg@redhat.com>
     Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
     Cc: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
     Cc: Rusty Russell <rusty@rustcorp.com.au>
     Cc: Andi Kleen <ak@linux.intel.com>
     Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
     Cc: Steven Rostedt <rostedt@goodmis.org>
     Link: http://lkml.kernel.org/r/20130708212350.6DD1931C15E@corp2gmr1-1.hot.corp.google.com
     Signed-off-by: Ingo Molnar <mingo@kernel.org>


> [   16.822058]
> [   16.822058] 2 locks held by bash/1198:
> [   16.822058]  #0:  (&tty->ldisc_sem){.+.+.+}, at: [<ffffffff816ade04>] tty_ldisc_ref_wait+0x24/0x60
> [   16.822058]  #1:  (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660
> [   16.822058]
> [   16.822058] stack backtrace:
> [   16.822058] CPU: 1 PID: 1198 Comm: bash Tainted: G        W    3.11.0-rc3-next-20130730+ #140
> [   16.822058] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007
> [   16.822058]  0000000000000000 ffff880019acdb28 ffffffff81d34074 0000000000000002
> [   16.822058]  0000000000000000 ffff880019acdb78 ffffffff8110ed75 ffff880019acdb98
> [   16.822058]  ffff880019fd0000 ffff880019acdb78 ffff880019fd0638 ffff880019fd0670
> [   16.822058] Call Trace:
> [   16.822058]  [<ffffffff81d34074>] dump_stack+0x59/0x7d
> [   16.822058]  [<ffffffff8110ed75>] print_circular_bug+0x105/0x120
> [   16.822058]  [<ffffffff8111064f>] check_prev_add+0x14f/0x590
> [   16.822058]  [<ffffffff81d3ab5f>] ? _raw_spin_unlock_irq+0x4f/0x70
> [   16.822058]  [<ffffffff811111cc>] validate_chain+0x73c/0x850
> [   16.822058]  [<ffffffff8110ae0f>] ? trace_hardirqs_off_caller+0x1f/0x190
> [   16.822058]  [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
> [   16.822058]  [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
> [   16.822058]  [<ffffffff816aa3bb>] ? n_tty_read+0x49b/0x660
> [   16.822058]  [<ffffffff81d372c1>] down_read+0x51/0xa0
> [   16.822058]  [<ffffffff816aa3bb>] ? n_tty_read+0x49b/0x660
> [   16.822058]  [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
> [   16.822058]  [<ffffffff810e4130>] ? try_to_wake_up+0x210/0x210
> [   16.822058]  [<ffffffff816a3bb6>] tty_read+0x86/0xf0
> [   16.822058]  [<ffffffff811f21d3>] vfs_read+0xc3/0x130
> [   16.822058]  [<ffffffff811f2702>] SyS_read+0x62/0xa0
> [   16.822058]  [<ffffffff815e24ee>] ? trace_hardirqs_on_thunk+0x3a/0x3f
> [   16.822058]  [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b
>
> Signed-off-by: Artem Savkov <artem.savkov@gmail.com>
> ---
>   drivers/tty/n_tty.c |   12 ++++++++++++
>   1 files changed, 12 insertions(+), 0 deletions(-)
>
> diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
> index dd8ae0c..38c09db 100644
> --- a/drivers/tty/n_tty.c
> +++ b/drivers/tty/n_tty.c
> @@ -2203,11 +2203,23 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
>   				break;
>   			}
>   			n_tty_set_room(tty);
> +			mutex_unlock(&ldata->atomic_read_lock);
>   			up_read(&tty->termios_rwsem);
>
>   			timeout = schedule_timeout(timeout);
>
>   			down_read(&tty->termios_rwsem);
> +			if (file->f_flags & O_NONBLOCK) {
> +				if (!mutex_trylock(&ldata->atomic_read_lock)) {
> +					retval = -EAGAIN;
> +					break;
> +				}
> +			} else {
> +				if (mutex_lock_interruptible(&ldata->atomic_read_lock)) {
> +					retval = -ERESTARTSYS;
> +					break;
> +				}
> +			}
>   			continue;
>   		}
>   		__set_current_state(TASK_RUNNING);
>



* Re: [PATCH] n_tty: release atomic_read_lock before calling schedule_timeout()
  2013-07-30 16:39 ` Peter Hurley
@ 2013-07-31 11:47   ` Artem Savkov
  2013-08-01 20:06     ` Peter Hurley
  2013-08-11 12:04     ` [PATCH tty-next] n_tty: Fix termios_rwsem lockdep false positive Peter Hurley
From: Artem Savkov @ 2013-07-31 11:47 UTC (permalink / raw)
  To: Peter Hurley; +Cc: Michel Lespinasse, gregkh, jslaby, linux-kernel, Ingo Molnar

On Tue, Jul 30, 2013 at 12:39:54PM -0400, Peter Hurley wrote:
> On 07/30/2013 11:35 AM, Artem Savkov wrote:
> >ldata->atomic_read_lock should be released before scheduling as well as
> >tty->termios_rwsem, otherwise there is a potential deadlock detected by lockdep
> 
> False positive.
> 
> >Introduced in "n_tty: Access termios values safely"
> >(9356b535fcb71db494fc434acceb79f56d15bda2 in linux-next.git)
> >
> >[   16.822058] ======================================================
> >[   16.822058] [ INFO: possible circular locking dependency detected ]
> >[   16.822058] 3.11.0-rc3-next-20130730+ #140 Tainted: G        W
> >[   16.822058] -------------------------------------------------------
> >[   16.822058] bash/1198 is trying to acquire lock:
> >[   16.822058]  (&tty->termios_rwsem){++++..}, at: [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
> >[   16.822058]
> >[   16.822058] but task is already holding lock:
> >[   16.822058]  (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660
> >[   16.822058]
> >[   16.822058] which lock already depends on the new lock.
> >[   16.822058]
> >[   16.822058]
> >[   16.822058] the existing dependency chain (in reverse order) is:
> >[   16.822058]
> >-> #1 (&ldata->atomic_read_lock){+.+...}:
> >[   16.822058]        [<ffffffff811111cc>] validate_chain+0x73c/0x850
> >[   16.822058]        [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
> >[   16.822058]        [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
> >[   16.822058]        [<ffffffff81d34b9c>] mutex_lock_interruptible_nested+0x7c/0x540
> >[   16.822058]        [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660
> >[   16.822058]        [<ffffffff816a3bb6>] tty_read+0x86/0xf0
> >[   16.822058]        [<ffffffff811f21d3>] vfs_read+0xc3/0x130
> >[   16.822058]        [<ffffffff811f2702>] SyS_read+0x62/0xa0
> >[   16.822058]        [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b
> >[   16.822058]
> >-> #0 (&tty->termios_rwsem){++++..}:
> >[   16.822058]        [<ffffffff8111064f>] check_prev_add+0x14f/0x590
> >[   16.822058]        [<ffffffff811111cc>] validate_chain+0x73c/0x850
> >[   16.822058]        [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
> >[   16.822058]        [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
> >[   16.822058]        [<ffffffff81d372c1>] down_read+0x51/0xa0
> >[   16.822058]        [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
> >[   16.822058]        [<ffffffff816a3bb6>] tty_read+0x86/0xf0
> >[   16.822058]        [<ffffffff811f21d3>] vfs_read+0xc3/0x130
> >[   16.822058]        [<ffffffff811f2702>] SyS_read+0x62/0xa0
> >[   16.822058]        [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b
> >[   16.822058]
> >[   16.822058] other info that might help us debug this:
> >[   16.822058]
> >[   16.822058]  Possible unsafe locking scenario:
> >[   16.822058]
> >[   16.822058]        CPU0                    CPU1
> >[   16.822058]        ----                    ----
> >[   16.822058]   lock(&ldata->atomic_read_lock);
> >[   16.822058]                                lock(&tty->termios_rwsem);
> >[   16.822058]                                lock(&ldata->atomic_read_lock);
> >[   16.822058]   lock(&tty->termios_rwsem);
> >[   16.822058]
> >[   16.822058]  *** DEADLOCK ***
> 
> This situation is not possible since termios_rwsem is a read/write semaphore;
> CPU1 cannot prevent CPU0 from obtaining a read lock on termios_rwsem.
Oops, yes, sorry.

> This looks like a regression caused by:
> 
> commit a51805efae5dda0da66f79268ffcf0715f9dbea4
> Author: Michel Lespinasse <walken@google.com>
> Date:   Mon Jul 8 14:23:49 2013 -0700
> 
>     lockdep: Introduce lock_acquire_exclusive()/shared() helper macros
Doesn't seem to be this commit. I see nothing wrong here and just to be
sure I've checked the kernel with this commit reverted. The issue is
still there.


>     In lockdep.h, the spinlock/mutex/rwsem/rwlock/lock_map acquire macros have
>     different definitions based on the value of CONFIG_PROVE_LOCKING.  We have
>     separate ifdefs for each of these definitions, which seems redundant.
> 
>     Introduce lock_acquire_{exclusive,shared,shared_recursive} helpers which
>     will have different definitions based on CONFIG_PROVE_LOCKING.  Then all
>     other helper macros can be defined based on the above ones, which reduces
>     the amount of ifdefined code.
> 
>     Signed-off-by: Michel Lespinasse <walken@google.com>
>     Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
>     Signed-off-by: Peter Zijlstra <peterz@infradead.org>
>     Cc: Oleg Nesterov <oleg@redhat.com>
>     Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
>     Cc: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
>     Cc: Rusty Russell <rusty@rustcorp.com.au>
>     Cc: Andi Kleen <ak@linux.intel.com>
>     Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
>     Cc: Steven Rostedt <rostedt@goodmis.org>
>     Link: http://lkml.kernel.org/r/20130708212350.6DD1931C15E@corp2gmr1-1.hot.corp.google.com
>     Signed-off-by: Ingo Molnar <mingo@kernel.org>
> 
> 
> >[   16.822058]
> >[   16.822058] 2 locks held by bash/1198:
> >[   16.822058]  #0:  (&tty->ldisc_sem){.+.+.+}, at: [<ffffffff816ade04>] tty_ldisc_ref_wait+0x24/0x60
> >[   16.822058]  #1:  (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660
> >[   16.822058]
> >[   16.822058] stack backtrace:
> >[   16.822058] CPU: 1 PID: 1198 Comm: bash Tainted: G        W    3.11.0-rc3-next-20130730+ #140
> >[   16.822058] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007
> >[   16.822058]  0000000000000000 ffff880019acdb28 ffffffff81d34074 0000000000000002
> >[   16.822058]  0000000000000000 ffff880019acdb78 ffffffff8110ed75 ffff880019acdb98
> >[   16.822058]  ffff880019fd0000 ffff880019acdb78 ffff880019fd0638 ffff880019fd0670
> >[   16.822058] Call Trace:
> >[   16.822058]  [<ffffffff81d34074>] dump_stack+0x59/0x7d
> >[   16.822058]  [<ffffffff8110ed75>] print_circular_bug+0x105/0x120
> >[   16.822058]  [<ffffffff8111064f>] check_prev_add+0x14f/0x590
> >[   16.822058]  [<ffffffff81d3ab5f>] ? _raw_spin_unlock_irq+0x4f/0x70
> >[   16.822058]  [<ffffffff811111cc>] validate_chain+0x73c/0x850
> >[   16.822058]  [<ffffffff8110ae0f>] ? trace_hardirqs_off_caller+0x1f/0x190
> >[   16.822058]  [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
> >[   16.822058]  [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
> >[   16.822058]  [<ffffffff816aa3bb>] ? n_tty_read+0x49b/0x660
> >[   16.822058]  [<ffffffff81d372c1>] down_read+0x51/0xa0
> >[   16.822058]  [<ffffffff816aa3bb>] ? n_tty_read+0x49b/0x660
> >[   16.822058]  [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
> >[   16.822058]  [<ffffffff810e4130>] ? try_to_wake_up+0x210/0x210
> >[   16.822058]  [<ffffffff816a3bb6>] tty_read+0x86/0xf0
> >[   16.822058]  [<ffffffff811f21d3>] vfs_read+0xc3/0x130
> >[   16.822058]  [<ffffffff811f2702>] SyS_read+0x62/0xa0
> >[   16.822058]  [<ffffffff815e24ee>] ? trace_hardirqs_on_thunk+0x3a/0x3f
> >[   16.822058]  [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b
> >
> >Signed-off-by: Artem Savkov <artem.savkov@gmail.com>
> >---
> >  drivers/tty/n_tty.c |   12 ++++++++++++
> >  1 files changed, 12 insertions(+), 0 deletions(-)
> >
> >diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
> >index dd8ae0c..38c09db 100644
> >--- a/drivers/tty/n_tty.c
> >+++ b/drivers/tty/n_tty.c
> >@@ -2203,11 +2203,23 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
> >  				break;
> >  			}
> >  			n_tty_set_room(tty);
> >+			mutex_unlock(&ldata->atomic_read_lock);
> >  			up_read(&tty->termios_rwsem);
> >
> >  			timeout = schedule_timeout(timeout);
> >
> >  			down_read(&tty->termios_rwsem);
> >+			if (file->f_flags & O_NONBLOCK) {
> >+				if (!mutex_trylock(&ldata->atomic_read_lock)) {
> >+					retval = -EAGAIN;
> >+					break;
> >+				}
> >+			} else {
> >+				if (mutex_lock_interruptible(&ldata->atomic_read_lock)) {
> >+					retval = -ERESTARTSYS;
> >+					break;
> >+				}
> >+			}
> >  			continue;
> >  		}
> >  		__set_current_state(TASK_RUNNING);
> >
> 

-- 
Regards,
    Artem


* Re: [PATCH] n_tty: release atomic_read_lock before calling schedule_timeout()
  2013-07-31 11:47   ` Artem Savkov
@ 2013-08-01 20:06     ` Peter Hurley
  2013-08-11 12:04     ` [PATCH tty-next] n_tty: Fix termios_rwsem lockdep false positive Peter Hurley
From: Peter Hurley @ 2013-08-01 20:06 UTC (permalink / raw)
  To: Michel Lespinasse, Artem Savkov; +Cc: gregkh, jslaby, linux-kernel, Ingo Molnar

On 07/31/2013 07:47 AM, Artem Savkov wrote:
> On Tue, Jul 30, 2013 at 12:39:54PM -0400, Peter Hurley wrote:
>> On 07/30/2013 11:35 AM, Artem Savkov wrote:
>>> ldata->atomic_read_lock should be released before scheduling as well as
>>> tty->termios_rwsem, otherwise there is a potential deadlock detected by lockdep
>>
>> False positive.
>>
>>> Introduced in "n_tty: Access termios values safely"
>>> (9356b535fcb71db494fc434acceb79f56d15bda2 in linux-next.git)
>>>
>>> [   16.822058] ======================================================
>>> [   16.822058] [ INFO: possible circular locking dependency detected ]
>>> [   16.822058] 3.11.0-rc3-next-20130730+ #140 Tainted: G        W
>>> [   16.822058] -------------------------------------------------------
>>> [   16.822058] bash/1198 is trying to acquire lock:
>>> [   16.822058]  (&tty->termios_rwsem){++++..}, at: [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
>>> [   16.822058]
>>> [   16.822058] but task is already holding lock:
>>> [   16.822058]  (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660
>>> [   16.822058]
>>> [   16.822058] which lock already depends on the new lock.
>>> [   16.822058]
>>> [   16.822058]
>>> [   16.822058] the existing dependency chain (in reverse order) is:
>>> [   16.822058]
>>> -> #1 (&ldata->atomic_read_lock){+.+...}:
>>> [   16.822058]        [<ffffffff811111cc>] validate_chain+0x73c/0x850
>>> [   16.822058]        [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
>>> [   16.822058]        [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
>>> [   16.822058]        [<ffffffff81d34b9c>] mutex_lock_interruptible_nested+0x7c/0x540
>>> [   16.822058]        [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660
>>> [   16.822058]        [<ffffffff816a3bb6>] tty_read+0x86/0xf0
>>> [   16.822058]        [<ffffffff811f21d3>] vfs_read+0xc3/0x130
>>> [   16.822058]        [<ffffffff811f2702>] SyS_read+0x62/0xa0
>>> [   16.822058]        [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b
>>> [   16.822058]
>>> -> #0 (&tty->termios_rwsem){++++..}:
>>> [   16.822058]        [<ffffffff8111064f>] check_prev_add+0x14f/0x590
>>> [   16.822058]        [<ffffffff811111cc>] validate_chain+0x73c/0x850
>>> [   16.822058]        [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
>>> [   16.822058]        [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
>>> [   16.822058]        [<ffffffff81d372c1>] down_read+0x51/0xa0
>>> [   16.822058]        [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
>>> [   16.822058]        [<ffffffff816a3bb6>] tty_read+0x86/0xf0
>>> [   16.822058]        [<ffffffff811f21d3>] vfs_read+0xc3/0x130
>>> [   16.822058]        [<ffffffff811f2702>] SyS_read+0x62/0xa0
>>> [   16.822058]        [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b
>>> [   16.822058]
>>> [   16.822058] other info that might help us debug this:
>>> [   16.822058]
>>> [   16.822058]  Possible unsafe locking scenario:
>>> [   16.822058]
>>> [   16.822058]        CPU0                    CPU1
>>> [   16.822058]        ----                    ----
>>> [   16.822058]   lock(&ldata->atomic_read_lock);
>>> [   16.822058]                                lock(&tty->termios_rwsem);
>>> [   16.822058]                                lock(&ldata->atomic_read_lock);
>>> [   16.822058]   lock(&tty->termios_rwsem);
>>> [   16.822058]
>>> [   16.822058]  *** DEADLOCK ***
>>
>> This situation is not possible since termios_rwsem is a read/write semaphore;
>> CPU1 cannot prevent CPU0 from obtaining a read lock on termios_rwsem.
> Oops, yes, sorry.
>
>> This looks like a regression caused by:
>>
>> commit a51805efae5dda0da66f79268ffcf0715f9dbea4
>> Author: Michel Lespinasse <walken@google.com>
>> Date:   Mon Jul 8 14:23:49 2013 -0700
>>
>>      lockdep: Introduce lock_acquire_exclusive()/shared() helper macros
> Doesn't seem to be this commit. I see nothing wrong here and just to be
> sure I've checked the kernel with this commit reverted. The issue is
> still there.

Yes, you're right. Apologies to Michel for the too-hasty blame.

Thanks for the report anyway. I'll track down the lockdep regression
as soon as I fix a real deadlock in the nouveau driver that disables
lockdep.

Regards,
Peter Hurley



* [PATCH tty-next] n_tty: Fix termios_rwsem lockdep false positive
  2013-07-31 11:47   ` Artem Savkov
  2013-08-01 20:06     ` Peter Hurley
@ 2013-08-11 12:04     ` Peter Hurley
  2013-08-12  9:28       ` Artem Savkov
From: Peter Hurley @ 2013-08-11 12:04 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: linux-next, linux-kernel, Jiri Slaby, Artem Savkov,
	Sergey Senozhatsky, Belisko Marek, Peter Hurley

Lockdep reports a circular lock dependency between
atomic_read_lock and termios_rwsem [1]. However, a lock
order deadlock is not possible since CPU1 only holds a
read lock which cannot prevent CPU0 from also acquiring
a read lock on the same r/w semaphore.

Unfortunately, lockdep cannot currently distinguish whether
the locks are read or write for any particular lock graph,
merely that the locks _were_ previously read and/or write.

Until lockdep is fixed, re-order atomic_read_lock so
termios_rwsem can be dropped and reacquired without
triggering lockdep.

Reported-by: Artem Savkov <artem.savkov@gmail.com>
Reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>

[1] Initial lockdep report from Artem Savkov <artem.savkov@gmail.com>

 ======================================================
 [ INFO: possible circular locking dependency detected ]
 3.11.0-rc3-next-20130730+ #140 Tainted: G        W
 -------------------------------------------------------
 bash/1198 is trying to acquire lock:
  (&tty->termios_rwsem){++++..}, at: [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660

 but task is already holding lock:
  (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660

 which lock already depends on the new lock.

 the existing dependency chain (in reverse order) is:

 -> #1 (&ldata->atomic_read_lock){+.+...}:
        [<ffffffff811111cc>] validate_chain+0x73c/0x850
        [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
        [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
        [<ffffffff81d34b9c>] mutex_lock_interruptible_nested+0x7c/0x540
        [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660
        [<ffffffff816a3bb6>] tty_read+0x86/0xf0
        [<ffffffff811f21d3>] vfs_read+0xc3/0x130
        [<ffffffff811f2702>] SyS_read+0x62/0xa0
        [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b

 -> #0 (&tty->termios_rwsem){++++..}:
        [<ffffffff8111064f>] check_prev_add+0x14f/0x590
        [<ffffffff811111cc>] validate_chain+0x73c/0x850
        [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
        [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
        [<ffffffff81d372c1>] down_read+0x51/0xa0
        [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
        [<ffffffff816a3bb6>] tty_read+0x86/0xf0
        [<ffffffff811f21d3>] vfs_read+0xc3/0x130
        [<ffffffff811f2702>] SyS_read+0x62/0xa0
        [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b

 other info that might help us debug this:

  Possible unsafe locking scenario:

        CPU0                    CPU1
        ----                    ----
   lock(&ldata->atomic_read_lock);
                                lock(&tty->termios_rwsem);
                                lock(&ldata->atomic_read_lock);
   lock(&tty->termios_rwsem);

  *** DEADLOCK ***

 2 locks held by bash/1198:
  #0:  (&tty->ldisc_sem){.+.+.+}, at: [<ffffffff816ade04>] tty_ldisc_ref_wait+0x24/0x60
  #1:  (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660

 stack backtrace:
 CPU: 1 PID: 1198 Comm: bash Tainted: G        W    3.11.0-rc3-next-20130730+ #140
 Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007
  0000000000000000 ffff880019acdb28 ffffffff81d34074 0000000000000002
  0000000000000000 ffff880019acdb78 ffffffff8110ed75 ffff880019acdb98
  ffff880019fd0000 ffff880019acdb78 ffff880019fd0638 ffff880019fd0670
 Call Trace:
  [<ffffffff81d34074>] dump_stack+0x59/0x7d
  [<ffffffff8110ed75>] print_circular_bug+0x105/0x120
  [<ffffffff8111064f>] check_prev_add+0x14f/0x590
  [<ffffffff81d3ab5f>] ? _raw_spin_unlock_irq+0x4f/0x70
  [<ffffffff811111cc>] validate_chain+0x73c/0x850
  [<ffffffff8110ae0f>] ? trace_hardirqs_off_caller+0x1f/0x190
  [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
  [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
  [<ffffffff816aa3bb>] ? n_tty_read+0x49b/0x660
  [<ffffffff81d372c1>] down_read+0x51/0xa0
  [<ffffffff816aa3bb>] ? n_tty_read+0x49b/0x660
  [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
  [<ffffffff810e4130>] ? try_to_wake_up+0x210/0x210
  [<ffffffff816a3bb6>] tty_read+0x86/0xf0
  [<ffffffff811f21d3>] vfs_read+0xc3/0x130
  [<ffffffff811f2702>] SyS_read+0x62/0xa0
  [<ffffffff815e24ee>] ? trace_hardirqs_on_thunk+0x3a/0x3f
  [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b
---
 drivers/tty/n_tty.c | 25 +++++++++++--------------
 1 file changed, 11 insertions(+), 14 deletions(-)

diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
index dd8ae0c..c9a9ddd 100644
--- a/drivers/tty/n_tty.c
+++ b/drivers/tty/n_tty.c
@@ -2122,6 +2122,17 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
 	if (c < 0)
 		return c;
 
+	/*
+	 *	Internal serialization of reads.
+	 */
+	if (file->f_flags & O_NONBLOCK) {
+		if (!mutex_trylock(&ldata->atomic_read_lock))
+			return -EAGAIN;
+	} else {
+		if (mutex_lock_interruptible(&ldata->atomic_read_lock))
+			return -ERESTARTSYS;
+	}
+
 	down_read(&tty->termios_rwsem);
 
 	minimum = time = 0;
@@ -2141,20 +2152,6 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
 		}
 	}
 
-	/*
-	 *	Internal serialization of reads.
-	 */
-	if (file->f_flags & O_NONBLOCK) {
-		if (!mutex_trylock(&ldata->atomic_read_lock)) {
-			up_read(&tty->termios_rwsem);
-			return -EAGAIN;
-		}
-	} else {
-		if (mutex_lock_interruptible(&ldata->atomic_read_lock)) {
-			up_read(&tty->termios_rwsem);
-			return -ERESTARTSYS;
-		}
-	}
 	packet = tty->packet;
 
 	add_wait_queue(&tty->read_wait, &wait);
-- 
1.8.1.2



* Re: [PATCH tty-next] n_tty: Fix termios_rwsem lockdep false positive
  2013-08-11 12:04     ` [PATCH tty-next] n_tty: Fix termios_rwsem lockdep false positive Peter Hurley
@ 2013-08-12  9:28       ` Artem Savkov
  2013-08-12 10:50         ` Sergey Senozhatsky
From: Artem Savkov @ 2013-08-12  9:28 UTC (permalink / raw)
  To: Peter Hurley
  Cc: Greg Kroah-Hartman, linux-next, linux-kernel, Jiri Slaby,
	Sergey Senozhatsky, Belisko Marek

Hi Peter,

On Sun, Aug 11, 2013 at 08:04:23AM -0400, Peter Hurley wrote:
> Lockdep reports a circular lock dependency between
> atomic_read_lock and termios_rwsem [1]. However, a lock
> order deadlock is not possible since CPU1 only holds a
> read lock which cannot prevent CPU0 from also acquiring
> a read lock on the same r/w semaphore.
> 
> Unfortunately, lockdep cannot currently distinguish whether
> the locks are read or write for any particular lock graph,
> merely that the locks _were_ previously read and/or write.
> 
> Until lockdep is fixed, re-order atomic_read_lock so
> termios_rwsem can be dropped and reacquired without
> triggering lockdep.

Works fine, thanks.

Reported-and-tested-by: Artem Savkov <artem.savkov@gmail.com>

> Reported-by: Artem Savkov <artem.savkov@gmail.com>
> Reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
> Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
> 
> [1] Initial lockdep report from Artem Savkov <artem.savkov@gmail.com>
> 
>  ======================================================
>  [ INFO: possible circular locking dependency detected ]
>  3.11.0-rc3-next-20130730+ #140 Tainted: G        W
>  -------------------------------------------------------
>  bash/1198 is trying to acquire lock:
>   (&tty->termios_rwsem){++++..}, at: [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
> 
>  but task is already holding lock:
>   (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660
> 
>  which lock already depends on the new lock.
> 
>  the existing dependency chain (in reverse order) is:
> 
>  -> #1 (&ldata->atomic_read_lock){+.+...}:
>         [<ffffffff811111cc>] validate_chain+0x73c/0x850
>         [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
>         [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
>         [<ffffffff81d34b9c>] mutex_lock_interruptible_nested+0x7c/0x540
>         [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660
>         [<ffffffff816a3bb6>] tty_read+0x86/0xf0
>         [<ffffffff811f21d3>] vfs_read+0xc3/0x130
>         [<ffffffff811f2702>] SyS_read+0x62/0xa0
>         [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b
> 
>  -> #0 (&tty->termios_rwsem){++++..}:
>         [<ffffffff8111064f>] check_prev_add+0x14f/0x590
>         [<ffffffff811111cc>] validate_chain+0x73c/0x850
>         [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
>         [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
>         [<ffffffff81d372c1>] down_read+0x51/0xa0
>         [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
>         [<ffffffff816a3bb6>] tty_read+0x86/0xf0
>         [<ffffffff811f21d3>] vfs_read+0xc3/0x130
>         [<ffffffff811f2702>] SyS_read+0x62/0xa0
>         [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b
> 
>  other info that might help us debug this:
> 
>   Possible unsafe locking scenario:
> 
>         CPU0                    CPU1
>         ----                    ----
>    lock(&ldata->atomic_read_lock);
>                                 lock(&tty->termios_rwsem);
>                                 lock(&ldata->atomic_read_lock);
>    lock(&tty->termios_rwsem);
> 
>   *** DEADLOCK ***
> 
>  2 locks held by bash/1198:
>   #0:  (&tty->ldisc_sem){.+.+.+}, at: [<ffffffff816ade04>] tty_ldisc_ref_wait+0x24/0x60
>   #1:  (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660
> 
>  stack backtrace:
>  CPU: 1 PID: 1198 Comm: bash Tainted: G        W    3.11.0-rc3-next-20130730+ #140
>  Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007
>   0000000000000000 ffff880019acdb28 ffffffff81d34074 0000000000000002
>   0000000000000000 ffff880019acdb78 ffffffff8110ed75 ffff880019acdb98
>   ffff880019fd0000 ffff880019acdb78 ffff880019fd0638 ffff880019fd0670
>  Call Trace:
>   [<ffffffff81d34074>] dump_stack+0x59/0x7d
>   [<ffffffff8110ed75>] print_circular_bug+0x105/0x120
>   [<ffffffff8111064f>] check_prev_add+0x14f/0x590
>   [<ffffffff81d3ab5f>] ? _raw_spin_unlock_irq+0x4f/0x70
>   [<ffffffff811111cc>] validate_chain+0x73c/0x850
>   [<ffffffff8110ae0f>] ? trace_hardirqs_off_caller+0x1f/0x190
>   [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
>   [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
>   [<ffffffff816aa3bb>] ? n_tty_read+0x49b/0x660
>   [<ffffffff81d372c1>] down_read+0x51/0xa0
>   [<ffffffff816aa3bb>] ? n_tty_read+0x49b/0x660
>   [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
>   [<ffffffff810e4130>] ? try_to_wake_up+0x210/0x210
>   [<ffffffff816a3bb6>] tty_read+0x86/0xf0
>   [<ffffffff811f21d3>] vfs_read+0xc3/0x130
>   [<ffffffff811f2702>] SyS_read+0x62/0xa0
>   [<ffffffff815e24ee>] ? trace_hardirqs_on_thunk+0x3a/0x3f
>   [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b
> ---
>  drivers/tty/n_tty.c | 25 +++++++++++--------------
>  1 file changed, 11 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
> index dd8ae0c..c9a9ddd 100644
> --- a/drivers/tty/n_tty.c
> +++ b/drivers/tty/n_tty.c
> @@ -2122,6 +2122,17 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
>  	if (c < 0)
>  		return c;
>  
> +	/*
> +	 *	Internal serialization of reads.
> +	 */
> +	if (file->f_flags & O_NONBLOCK) {
> +		if (!mutex_trylock(&ldata->atomic_read_lock))
> +			return -EAGAIN;
> +	} else {
> +		if (mutex_lock_interruptible(&ldata->atomic_read_lock))
> +			return -ERESTARTSYS;
> +	}
> +
>  	down_read(&tty->termios_rwsem);
>  
>  	minimum = time = 0;
> @@ -2141,20 +2152,6 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
>  		}
>  	}
>  
> -	/*
> -	 *	Internal serialization of reads.
> -	 */
> -	if (file->f_flags & O_NONBLOCK) {
> -		if (!mutex_trylock(&ldata->atomic_read_lock)) {
> -			up_read(&tty->termios_rwsem);
> -			return -EAGAIN;
> -		}
> -	} else {
> -		if (mutex_lock_interruptible(&ldata->atomic_read_lock)) {
> -			up_read(&tty->termios_rwsem);
> -			return -ERESTARTSYS;
> -		}
> -	}
>  	packet = tty->packet;
>  
>  	add_wait_queue(&tty->read_wait, &wait);
> -- 
> 1.8.1.2
> 

-- 
Regards,
    Artem


* Re: [PATCH tty-next] n_tty: Fix termios_rwsem lockdep false positive
  2013-08-12  9:28       ` Artem Savkov
@ 2013-08-12 10:50         ` Sergey Senozhatsky
  2013-08-12 12:55           ` Peter Hurley
From: Sergey Senozhatsky @ 2013-08-12 10:50 UTC (permalink / raw)
  To: Peter Hurley, Greg Kroah-Hartman, linux-next, linux-kernel,
	Jiri Slaby, Sergey Senozhatsky, Belisko Marek

On (08/12/13 13:28), Artem Savkov wrote:
> Hi Peter,
> 
> On Sun, Aug 11, 2013 at 08:04:23AM -0400, Peter Hurley wrote:
> > Lockdep reports a circular lock dependency between
> > atomic_read_lock and termios_rwsem [1]. However, a lock
> > order deadlock is not possible since CPU1 only holds a
> > read lock which cannot prevent CPU0 from also acquiring
> > a read lock on the same r/w semaphore.
> > 
> > Unfortunately, lockdep cannot currently distinguish whether
> > the locks are read or write for any particular lock graph,
> > merely that the locks _were_ previously read and/or write.
> > 
> > Until lockdep is fixed, re-order atomic_read_lock so
> > termios_rwsem can be dropped and reacquired without
> > triggering lockdep.
> 
> Works fine, thanks.
> 
> Reported-and-tested-by: Artem Savkov <artem.savkov@gmail.com>
> 
> > Reported-by: Artem Savkov <artem.savkov@gmail.com>
> > Reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
> > Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
> > 
> > [1] Initial lockdep report from Artem Savkov <artem.savkov@gmail.com>
> > 
> >  ======================================================
> >  [ INFO: possible circular locking dependency detected ]
> >  3.11.0-rc3-next-20130730+ #140 Tainted: G        W
> >  -------------------------------------------------------
> >  bash/1198 is trying to acquire lock:
> >   (&tty->termios_rwsem){++++..}, at: [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
> > 
> >  but task is already holding lock:
> >   (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660
> > 
> >  which lock already depends on the new lock.
> > 
> >  the existing dependency chain (in reverse order) is:
> > 
> >  -> #1 (&ldata->atomic_read_lock){+.+...}:
> >         [<ffffffff811111cc>] validate_chain+0x73c/0x850
> >         [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
> >         [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
> >         [<ffffffff81d34b9c>] mutex_lock_interruptible_nested+0x7c/0x540
> >         [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660
> >         [<ffffffff816a3bb6>] tty_read+0x86/0xf0
> >         [<ffffffff811f21d3>] vfs_read+0xc3/0x130
> >         [<ffffffff811f2702>] SyS_read+0x62/0xa0
> >         [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b
> > 
> >  -> #0 (&tty->termios_rwsem){++++..}:
> >         [<ffffffff8111064f>] check_prev_add+0x14f/0x590
> >         [<ffffffff811111cc>] validate_chain+0x73c/0x850
> >         [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
> >         [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
> >         [<ffffffff81d372c1>] down_read+0x51/0xa0
> >         [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
> >         [<ffffffff816a3bb6>] tty_read+0x86/0xf0
> >         [<ffffffff811f21d3>] vfs_read+0xc3/0x130
> >         [<ffffffff811f2702>] SyS_read+0x62/0xa0
> >         [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b
> > 
> >  other info that might help us debug this:
> > 
> >   Possible unsafe locking scenario:
> > 
> >         CPU0                    CPU1
> >         ----                    ----
> >    lock(&ldata->atomic_read_lock);
> >                                 lock(&tty->termios_rwsem);
> >                                 lock(&ldata->atomic_read_lock);
> >    lock(&tty->termios_rwsem);
> > 
> >   *** DEADLOCK ***
> > 
> >  2 locks held by bash/1198:
> >   #0:  (&tty->ldisc_sem){.+.+.+}, at: [<ffffffff816ade04>] tty_ldisc_ref_wait+0x24/0x60
> >   #1:  (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660
> > 
> >  stack backtrace:
> >  CPU: 1 PID: 1198 Comm: bash Tainted: G        W    3.11.0-rc3-next-20130730+ #140
> >  Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007
> >   0000000000000000 ffff880019acdb28 ffffffff81d34074 0000000000000002
> >   0000000000000000 ffff880019acdb78 ffffffff8110ed75 ffff880019acdb98
> >   ffff880019fd0000 ffff880019acdb78 ffff880019fd0638 ffff880019fd0670
> >  Call Trace:
> >   [<ffffffff81d34074>] dump_stack+0x59/0x7d
> >   [<ffffffff8110ed75>] print_circular_bug+0x105/0x120
> >   [<ffffffff8111064f>] check_prev_add+0x14f/0x590
> >   [<ffffffff81d3ab5f>] ? _raw_spin_unlock_irq+0x4f/0x70
> >   [<ffffffff811111cc>] validate_chain+0x73c/0x850
> >   [<ffffffff8110ae0f>] ? trace_hardirqs_off_caller+0x1f/0x190
> >   [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
> >   [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
> >   [<ffffffff816aa3bb>] ? n_tty_read+0x49b/0x660
> >   [<ffffffff81d372c1>] down_read+0x51/0xa0
> >   [<ffffffff816aa3bb>] ? n_tty_read+0x49b/0x660
> >   [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
> >   [<ffffffff810e4130>] ? try_to_wake_up+0x210/0x210
> >   [<ffffffff816a3bb6>] tty_read+0x86/0xf0
> >   [<ffffffff811f21d3>] vfs_read+0xc3/0x130
> >   [<ffffffff811f2702>] SyS_read+0x62/0xa0
> >   [<ffffffff815e24ee>] ? trace_hardirqs_on_thunk+0x3a/0x3f
> >   [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b
> > ---
> >  drivers/tty/n_tty.c | 25 +++++++++++--------------
> >  1 file changed, 11 insertions(+), 14 deletions(-)
> >

I hate to do this, but isn't it actually my patch posted here
https://lkml.org/lkml/2013/8/1/510

which was tagged as `wrong'?

	-ss

> > diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
> > index dd8ae0c..c9a9ddd 100644
> > --- a/drivers/tty/n_tty.c
> > +++ b/drivers/tty/n_tty.c
> > @@ -2122,6 +2122,17 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
> >  	if (c < 0)
> >  		return c;
> >  
> > +	/*
> > +	 *	Internal serialization of reads.
> > +	 */
> > +	if (file->f_flags & O_NONBLOCK) {
> > +		if (!mutex_trylock(&ldata->atomic_read_lock))
> > +			return -EAGAIN;
> > +	} else {
> > +		if (mutex_lock_interruptible(&ldata->atomic_read_lock))
> > +			return -ERESTARTSYS;
> > +	}
> > +
> >  	down_read(&tty->termios_rwsem);
> >  
> >  	minimum = time = 0;
> > @@ -2141,20 +2152,6 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
> >  		}
> >  	}
> >  
> > -	/*
> > -	 *	Internal serialization of reads.
> > -	 */
> > -	if (file->f_flags & O_NONBLOCK) {
> > -		if (!mutex_trylock(&ldata->atomic_read_lock)) {
> > -			up_read(&tty->termios_rwsem);
> > -			return -EAGAIN;
> > -		}
> > -	} else {
> > -		if (mutex_lock_interruptible(&ldata->atomic_read_lock)) {
> > -			up_read(&tty->termios_rwsem);
> > -			return -ERESTARTSYS;
> > -		}
> > -	}
> >  	packet = tty->packet;
> >  
> >  	add_wait_queue(&tty->read_wait, &wait);
> > -- 
> > 1.8.1.2
> > 
> 
> -- 
> Regards,
>     Artem
> 


* Re: [PATCH tty-next] n_tty: Fix termios_rwsem lockdep false positive
  2013-08-12 10:50         ` Sergey Senozhatsky
@ 2013-08-12 12:55           ` Peter Hurley
  2013-08-12 13:19             ` Sergey Senozhatsky
From: Peter Hurley @ 2013-08-12 12:55 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Greg Kroah-Hartman, linux-next, linux-kernel, Jiri Slaby, Belisko Marek

On 08/12/2013 06:50 AM, Sergey Senozhatsky wrote:
> On (08/12/13 13:28), Artem Savkov wrote:
>> Hi Peter,
>>
>> On Sun, Aug 11, 2013 at 08:04:23AM -0400, Peter Hurley wrote:
>>> Lockdep reports a circular lock dependency between
>>> atomic_read_lock and termios_rwsem [1]. However, a lock
>>> order deadlock is not possible since CPU1 only holds a
>>> read lock which cannot prevent CPU0 from also acquiring
>>> a read lock on the same r/w semaphore.
>>>
>>> Unfortunately, lockdep cannot currently distinguish whether
>>> the locks are read or write for any particular lock graph,
>>> merely that the locks _were_ previously read and/or write.
>>>
>>> Until lockdep is fixed, re-order atomic_read_lock so
>>> termios_rwsem can be dropped and reacquired without
>>> triggering lockdep.
>>
>> Works fine, thanks.
>>
>> Reported-and-tested-by: Artem Savkov <artem.savkov@gmail.com>
>>
>>> Reported-by: Artem Savkov <artem.savkov@gmail.com>
>>> Reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
>>> Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
>>>
>>> [1] Initial lockdep report from Artem Savkov <artem.savkov@gmail.com>
>>>
>>>   ======================================================
>>>   [ INFO: possible circular locking dependency detected ]
>>>   3.11.0-rc3-next-20130730+ #140 Tainted: G        W
>>>   -------------------------------------------------------
>>>   bash/1198 is trying to acquire lock:
>>>    (&tty->termios_rwsem){++++..}, at: [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
>>>
>>>   but task is already holding lock:
>>>    (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660
>>>
>>>   which lock already depends on the new lock.
>>>
>>>   the existing dependency chain (in reverse order) is:
>>>
>>>   -> #1 (&ldata->atomic_read_lock){+.+...}:
>>>          [<ffffffff811111cc>] validate_chain+0x73c/0x850
>>>          [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
>>>          [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
>>>          [<ffffffff81d34b9c>] mutex_lock_interruptible_nested+0x7c/0x540
>>>          [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660
>>>          [<ffffffff816a3bb6>] tty_read+0x86/0xf0
>>>          [<ffffffff811f21d3>] vfs_read+0xc3/0x130
>>>          [<ffffffff811f2702>] SyS_read+0x62/0xa0
>>>          [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b
>>>
>>>   -> #0 (&tty->termios_rwsem){++++..}:
>>>          [<ffffffff8111064f>] check_prev_add+0x14f/0x590
>>>          [<ffffffff811111cc>] validate_chain+0x73c/0x850
>>>          [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
>>>          [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
>>>          [<ffffffff81d372c1>] down_read+0x51/0xa0
>>>          [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
>>>          [<ffffffff816a3bb6>] tty_read+0x86/0xf0
>>>          [<ffffffff811f21d3>] vfs_read+0xc3/0x130
>>>          [<ffffffff811f2702>] SyS_read+0x62/0xa0
>>>          [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b
>>>
>>>   other info that might help us debug this:
>>>
>>>    Possible unsafe locking scenario:
>>>
>>>          CPU0                    CPU1
>>>          ----                    ----
>>>     lock(&ldata->atomic_read_lock);
>>>                                  lock(&tty->termios_rwsem);
>>>                                  lock(&ldata->atomic_read_lock);
>>>     lock(&tty->termios_rwsem);
>>>
>>>    *** DEADLOCK ***
>>>
>>>   2 locks held by bash/1198:
>>>    #0:  (&tty->ldisc_sem){.+.+.+}, at: [<ffffffff816ade04>] tty_ldisc_ref_wait+0x24/0x60
>>>    #1:  (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff816aa0f0>] n_tty_read+0x1d0/0x660
>>>
>>>   stack backtrace:
>>>   CPU: 1 PID: 1198 Comm: bash Tainted: G        W    3.11.0-rc3-next-20130730+ #140
>>>   Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007
>>>    0000000000000000 ffff880019acdb28 ffffffff81d34074 0000000000000002
>>>    0000000000000000 ffff880019acdb78 ffffffff8110ed75 ffff880019acdb98
>>>    ffff880019fd0000 ffff880019acdb78 ffff880019fd0638 ffff880019fd0670
>>>   Call Trace:
>>>    [<ffffffff81d34074>] dump_stack+0x59/0x7d
>>>    [<ffffffff8110ed75>] print_circular_bug+0x105/0x120
>>>    [<ffffffff8111064f>] check_prev_add+0x14f/0x590
>>>    [<ffffffff81d3ab5f>] ? _raw_spin_unlock_irq+0x4f/0x70
>>>    [<ffffffff811111cc>] validate_chain+0x73c/0x850
>>>    [<ffffffff8110ae0f>] ? trace_hardirqs_off_caller+0x1f/0x190
>>>    [<ffffffff811117e0>] __lock_acquire+0x500/0x5d0
>>>    [<ffffffff81111a29>] lock_acquire+0x179/0x1d0
>>>    [<ffffffff816aa3bb>] ? n_tty_read+0x49b/0x660
>>>    [<ffffffff81d372c1>] down_read+0x51/0xa0
>>>    [<ffffffff816aa3bb>] ? n_tty_read+0x49b/0x660
>>>    [<ffffffff816aa3bb>] n_tty_read+0x49b/0x660
>>>    [<ffffffff810e4130>] ? try_to_wake_up+0x210/0x210
>>>    [<ffffffff816a3bb6>] tty_read+0x86/0xf0
>>>    [<ffffffff811f21d3>] vfs_read+0xc3/0x130
>>>    [<ffffffff811f2702>] SyS_read+0x62/0xa0
>>>    [<ffffffff815e24ee>] ? trace_hardirqs_on_thunk+0x3a/0x3f
>>>    [<ffffffff81d45259>] system_call_fastpath+0x16/0x1b
>>> ---
>>>   drivers/tty/n_tty.c | 25 +++++++++++--------------
>>>   1 file changed, 11 insertions(+), 14 deletions(-)
>>>
>
> I hate to do this, but isn't it actually my patch posted here
> https://lkml.org/lkml/2013/8/1/510
>
> which was tagged as `wrong'?

Sergey,

My apologies; I was mistaken regarding this problem being a lockdep
regression (although it's still a false positive from lockdep). Once
I had worked around some issues with the nouveau driver, I was able to
reproduce the lockdep report on 3.10.

I included Artem's lockdep report in the changelog because I received
that first, on 30 July.

My patch below is not the same as your patch of 1 Aug. This patch
preserves the protected access of termios.c_cc[VMIN] and termios.c_cc[VTIME]
(via the MIN_CHAR() and TIME_CHAR() macros).
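
Roughly, the ordering in the patch becomes (sketch only; the non-canonical
mode handling around MIN_CHAR()/TIME_CHAR() is elided):

        /* serialize readers first, without holding termios_rwsem */
        if (file->f_flags & O_NONBLOCK) {
                if (!mutex_trylock(&ldata->atomic_read_lock))
                        return -EAGAIN;
        } else {
                if (mutex_lock_interruptible(&ldata->atomic_read_lock))
                        return -ERESTARTSYS;
        }

        down_read(&tty->termios_rwsem);

        /*
         * MIN_CHAR()/TIME_CHAR() dereference termios.c_cc[VMIN]/[VTIME],
         * so they are still evaluated only with termios_rwsem held.
         */
        minimum = MIN_CHAR(tty);
        time = (HZ / 10) * TIME_CHAR(tty);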

If you'd prefer, I could add to changelog:

    Patch based on original posted here https://lkml.org/lkml/2013/8/1/510
    by Sergey Senozhatsky <sergey.senozhatsky@gmail.com>

Regards,
Peter Hurley


>>> diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
>>> index dd8ae0c..c9a9ddd 100644
>>> --- a/drivers/tty/n_tty.c
>>> +++ b/drivers/tty/n_tty.c
>>> @@ -2122,6 +2122,17 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
>>>   	if (c < 0)
>>>   		return c;
>>>
>>> +	/*
>>> +	 *	Internal serialization of reads.
>>> +	 */
>>> +	if (file->f_flags & O_NONBLOCK) {
>>> +		if (!mutex_trylock(&ldata->atomic_read_lock))
>>> +			return -EAGAIN;
>>> +	} else {
>>> +		if (mutex_lock_interruptible(&ldata->atomic_read_lock))
>>> +			return -ERESTARTSYS;
>>> +	}
>>> +
>>>   	down_read(&tty->termios_rwsem);
>>>
>>>   	minimum = time = 0;
>>> @@ -2141,20 +2152,6 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
>>>   		}
>>>   	}
>>>
>>> -	/*
>>> -	 *	Internal serialization of reads.
>>> -	 */
>>> -	if (file->f_flags & O_NONBLOCK) {
>>> -		if (!mutex_trylock(&ldata->atomic_read_lock)) {
>>> -			up_read(&tty->termios_rwsem);
>>> -			return -EAGAIN;
>>> -		}
>>> -	} else {
>>> -		if (mutex_lock_interruptible(&ldata->atomic_read_lock)) {
>>> -			up_read(&tty->termios_rwsem);
>>> -			return -ERESTARTSYS;
>>> -		}
>>> -	}
>>>   	packet = tty->packet;
>>>
>>>   	add_wait_queue(&tty->read_wait, &wait);
>>> --
>>> 1.8.1.2
>>>
>>
>> --
>> Regards,
>>      Artem
>>



* Re: [PATCH tty-next] n_tty: Fix termios_rwsem lockdep false positive
  2013-08-12 12:55           ` Peter Hurley
@ 2013-08-12 13:19             ` Sergey Senozhatsky
  2013-08-12 13:39               ` Peter Hurley
From: Sergey Senozhatsky @ 2013-08-12 13:19 UTC (permalink / raw)
  To: Peter Hurley
  Cc: Greg Kroah-Hartman, linux-next, linux-kernel, Jiri Slaby, Belisko Marek

On (08/12/13 08:55), Peter Hurley wrote:
> >>> [..]
> >>>  drivers/tty/n_tty.c | 25 +++++++++++--------------
> >>>  1 file changed, 11 insertions(+), 14 deletions(-)
> >>>
> >
> >I hate to do this, but isn't it actually my patch posted here
> >https://lkml.org/lkml/2013/8/1/510
> >
> >which was tagged as `wrong'?
> 
> Sergey,
> 
> My apologies; I was mistaken regarding this problem being a lockdep
> regression (although it's still a false positive from lockdep). Once
> I had worked around some issues with the nouveau driver, I was able to
> reproduce the lockdep report on 3.10.
>
no problem.

> I included Artem's lockdep report in the changelog because I received
> that first, on 30 July.
>
> My patch below is not the same as your patch of 1 Aug. This patch
> preserves the protected access of termios.c_cc[VMIN] and termios.c_cc[VTIME]
> (via the MIN_CHAR() and TIME_CHAR() macros).

fair enough. v3 was protecting VMIN/VTIME (my bad, I noticed this a bit later),
but I didn't submit it since v2 did not get a positive response.

> If you'd prefer, I could add to changelog:
> 
>    Patch based on original posted here https://lkml.org/lkml/2013/8/1/510
>    by Sergey Senozhatsky <sergey.senozhatsky@gmail.com>

if you don't mind, that would be great.

thanks a lot,
	-ss

> Regards,
> Peter Hurley
> 
> 
> >>>diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
> >>>index dd8ae0c..c9a9ddd 100644
> >>>--- a/drivers/tty/n_tty.c
> >>>+++ b/drivers/tty/n_tty.c
> >>>@@ -2122,6 +2122,17 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
> >>>  	if (c < 0)
> >>>  		return c;
> >>>
> >>>+	/*
> >>>+	 *	Internal serialization of reads.
> >>>+	 */
> >>>+	if (file->f_flags & O_NONBLOCK) {
> >>>+		if (!mutex_trylock(&ldata->atomic_read_lock))
> >>>+			return -EAGAIN;
> >>>+	} else {
> >>>+		if (mutex_lock_interruptible(&ldata->atomic_read_lock))
> >>>+			return -ERESTARTSYS;
> >>>+	}
> >>>+
> >>>  	down_read(&tty->termios_rwsem);
> >>>
> >>>  	minimum = time = 0;
> >>>@@ -2141,20 +2152,6 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
> >>>  		}
> >>>  	}
> >>>
> >>>-	/*
> >>>-	 *	Internal serialization of reads.
> >>>-	 */
> >>>-	if (file->f_flags & O_NONBLOCK) {
> >>>-		if (!mutex_trylock(&ldata->atomic_read_lock)) {
> >>>-			up_read(&tty->termios_rwsem);
> >>>-			return -EAGAIN;
> >>>-		}
> >>>-	} else {
> >>>-		if (mutex_lock_interruptible(&ldata->atomic_read_lock)) {
> >>>-			up_read(&tty->termios_rwsem);
> >>>-			return -ERESTARTSYS;
> >>>-		}
> >>>-	}
> >>>  	packet = tty->packet;
> >>>
> >>>  	add_wait_queue(&tty->read_wait, &wait);
> >>>--
> >>>1.8.1.2
> >>>
> >>
> >>--
> >>Regards,
> >>     Artem
> >>
> 


* Re: [PATCH tty-next] n_tty: Fix termios_rwsem lockdep false positive
  2013-08-12 13:19             ` Sergey Senozhatsky
@ 2013-08-12 13:39               ` Peter Hurley
  2013-08-12 15:53                 ` Greg Kroah-Hartman
From: Peter Hurley @ 2013-08-12 13:39 UTC (permalink / raw)
  To: Sergey Senozhatsky, Greg Kroah-Hartman
  Cc: linux-next, linux-kernel, Jiri Slaby, Belisko Marek

On 08/12/2013 09:19 AM, Sergey Senozhatsky wrote:
> On (08/12/13 08:55), Peter Hurley wrote:
>>>>> [..]
>>>>>   drivers/tty/n_tty.c | 25 +++++++++++--------------
>>>>>   1 file changed, 11 insertions(+), 14 deletions(-)
>>>>>
>>>
>>> I hate to do this, but isn't it actually my patch posted here
>>> https://lkml.org/lkml/2013/8/1/510
>>>
>>> which was tagged as `wrong'?
>>
>> Sergey,
>>
>> My apologies; I was mistaken regarding this problem being a lockdep
>> regression (although it's still a false positive from lockdep). Once
>> I had worked around some issues with the nouveau driver, I was able to
>> reproduce the lockdep report on 3.10.
>>
> no problem.
>
>> I included Artem's lockdep report in the changelog because I received
>> that first, on 30 July.
>>
>> My patch below is not the same as your patch of 1 Aug. This patch
>> preserves the protected access of termios.c_cc[VMIN] and termios.c_cc[VTIME]
>> (via the MIN_CHAR() and TIME_CHAR() macros).
>
> fair enough. v3 was protecting VMIN/VTIME (my bad, I noticed this a bit later),
> but I didn't submit it since v2 did not get positive response.
>
>> If you'd prefer, I could add to changelog:
>>
>>     Patch based on original posted here https://lkml.org/lkml/2013/8/1/510
>>     by Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
>
> if you don't mind, that would be great.

Ok.

Greg,
Should I re-spin a v2 to include the note above
(or can you add it with Artem's Tested-by)?

Regards,
Peter Hurley


* Re: [PATCH tty-next] n_tty: Fix termios_rwsem lockdep false positive
  2013-08-12 13:39               ` Peter Hurley
@ 2013-08-12 15:53                 ` Greg Kroah-Hartman
From: Greg Kroah-Hartman @ 2013-08-12 15:53 UTC (permalink / raw)
  To: Peter Hurley
  Cc: Sergey Senozhatsky, linux-next, linux-kernel, Jiri Slaby, Belisko Marek

On Mon, Aug 12, 2013 at 09:39:01AM -0400, Peter Hurley wrote:
> On 08/12/2013 09:19 AM, Sergey Senozhatsky wrote:
> >On (08/12/13 08:55), Peter Hurley wrote:
> >>>>>[..]
> >>>>>  drivers/tty/n_tty.c | 25 +++++++++++--------------
> >>>>>  1 file changed, 11 insertions(+), 14 deletions(-)
> >>>>>
> >>>
> >>>I hate to do this, but isn't it actually my patch posted here
> >>>https://lkml.org/lkml/2013/8/1/510
> >>>
> >>>which was tagged as `wrong'?
> >>
> >>Sergey,
> >>
> >>My apologies; I was mistaken regarding this problem being a lockdep
> >>regression (although it's still a false positive from lockdep). Once
> >>I had worked around some issues with the nouveau driver, I was able to
> >>reproduce the lockdep report on 3.10.
> >>
> >no problem.
> >
> >>I included Artem's lockdep report in the changelog because I received
> >>that first, on 30 July.
> >>
> >>My patch below is not the same as your patch of 1 Aug. This patch
> >>preserves the protected access of termios.c_cc[VMIN] and termios.c_cc[VTIME]
> >>(via the MIN_CHAR() and TIME_CHAR() macros).
> >
> >fair enough. v3 was protecting VMIN/VTIME (my bad, I noticed this a bit later),
> >but I didn't submit it since v2 did not get positive response.
> >
> >>If you'd prefer, I could add to changelog:
> >>
> >>    Patch based on original posted here https://lkml.org/lkml/2013/8/1/510
> >>    by Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
> >
> >if you don't mind, that would be great.
> 
> Ok.
> 
> Greg,
> Should I re-spin a v2 to include the note above
> (or can you add it with Artem's Tested-by)?

I'll add it, no need to resend.

greg k-h

