From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1757441Ab3EUBWL (ORCPT );
	Mon, 20 May 2013 21:22:11 -0400
Received: from gate.crashing.org ([63.228.1.57]:43849 "EHLO gate.crashing.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1755916Ab3EUBWK (ORCPT );
	Mon, 20 May 2013 21:22:10 -0400
Message-ID: <1369099320.6387.33.camel@pasglop>
Subject: lockdep spew from tty
From: Benjamin Herrenschmidt
To: Greg Kroah-Hartman
Cc: Linux Kernel list
Date: Tue, 21 May 2013 11:22:00 +1000
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.6.4-0ubuntu1
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Greg !

Caught that on a console today running some 3.10-almost-rc2 (based on
ec50f2a97a4a7098a81b40030e0bfe28bdc43740). Right now I don't have the
bandwidth to investigate but I thought you might be interested :-)

I'll take another peek if it happens again.

======================================================
[ INFO: possible circular locking dependency detected ]
3.10.0-rc1-test #19 Not tainted
-------------------------------------------------------
kworker/24:1/1089 is trying to acquire lock:
 (&ldata->output_lock){+.+...}, at: [] .process_echoes+0x34/0x2ec

but task is already holding lock:
 ((&buf->work)){+.+...}, at: [] .process_one_work+0x1f8/0x43c

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 ((&buf->work)){+.+...}:
       [] .flush_work+0x38/0x258
       [] .__cancel_work_timer+0xe0/0x140
       [] .tty_port_destroy+0x14/0x2c
       [] .vc_deallocate+0xfc/0x128
       [] .vt_ioctl+0xae4/0x13a4
       [] .tty_ioctl+0xd1c/0xe68
       [] .vfs_ioctl+0x44/0x6c
       [] .do_vfs_ioctl+0x614/0x6ac
       [] .SyS_ioctl+0x44/0x70
       [] syscall_exit+0x0/0x98

-> #1 (console_lock){+.+.+.}:
       [] .console_lock+0x80/0x98
       [] .do_con_write.part.16+0x3c/0x1fb0
       [] .con_write+0x28/0x40
       [] .n_tty_write+0x28c/0x424
       [] .tty_write+0x184/0x238
       [] .vfs_write+0xd4/0x1cc
       [] .SyS_write+0x48/0x7c
       [] syscall_exit+0x0/0x98

-> #0 (&ldata->output_lock){+.+...}:
       [] .lock_acquire+0x54/0x70
       [] .mutex_lock_nested+0x9c/0x4d4
       [] .process_echoes+0x34/0x2ec
       [] .n_tty_receive_buf+0xc64/0xf90
       [] .flush_to_ldisc+0x110/0x1ac
       [] .process_one_work+0x280/0x43c
       [] .worker_thread+0x1e0/0x324
       [] .kthread+0xc8/0xd4
       [] .ret_from_kernel_thread+0x5c/0xb0

other info that might help us debug this:

Chain exists of:
  &ldata->output_lock --> console_lock --> (&buf->work)

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock((&buf->work));
                               lock(console_lock);
                               lock((&buf->work));
  lock(&ldata->output_lock);

 *** DEADLOCK ***

2 locks held by kworker/24:1/1089:
 #0:  (events){.+.+.+}, at: [] .process_one_work+0x1f8/0x43c
 #1:  ((&buf->work)){+.+...}, at: [] .process_one_work+0x1f8/0x43c

stack backtrace:
CPU: 24 PID: 1089 Comm: kworker/24:1 Not tainted 3.10.0-rc1-test #19
Workqueue: events .flush_to_ldisc
Call Trace:
[c000003ed7c37350] [c000000000011b18] .show_stack+0x50/0x14c (unreliable)
[c000003ed7c37420] [c00000000070eb90] .dump_stack+0x28/0x3c
[c000003ed7c37490] [c00000000070b16c] .print_circular_bug+0x364/0x374
[c000003ed7c37540] [c0000000000a4088] .__lock_acquire+0x14d8/0x1d08
[c000003ed7c37690] [c0000000000a4dc4] .lock_acquire+0x54/0x70
[c000003ed7c37720] [c000000000705780] .mutex_lock_nested+0x9c/0x4d4
[c000003ed7c37830] [c00000000037aa0c] .process_echoes+0x34/0x2ec
[c000003ed7c378f0] [c00000000037cc04] .n_tty_receive_buf+0xc64/0xf90
[c000003ed7c37aa0] [c000000000380d3c] .flush_to_ldisc+0x110/0x1ac
[c000003ed7c37b60] [c00000000007793c] .process_one_work+0x280/0x43c
[c000003ed7c37c20] [c000000000077d10] .worker_thread+0x1e0/0x324
[c000003ed7c37cd0] [c00000000007e360] .kthread+0xc8/0xd4

Cheers,
Ben.
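
[Editorial note, not part of the original mail: the following is a minimal
userspace sketch of the lock-order cycle lockdep is flagging above. It is
illustration only, not the kernel code paths: plain pthread mutexes stand in
for console_lock, for the n_tty ldisc's output_lock, and for the pseudo-lock
that lockdep associates with the (&buf->work) work item -- the worker "holds"
it while flush_to_ldisc() runs, and flush_work()/cancel_work_sync() "acquires"
it while waiting for the item to finish. All names below are illustrative.]

/*
 * Illustration of the reported cycle, modeled with pthread mutexes:
 *
 *   output_lock  -> console_lock   (chain #1: n_tty_write -> con_write)
 *   console_lock -> work           (chain #2: vt_ioctl -> tty_port_destroy -> flush_work)
 *   work         -> output_lock    (chain #0: flush_to_ldisc worker -> process_echoes)
 *
 * "work" stands in for the pseudo-lock lockdep attaches to (&buf->work).
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t output_lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t console_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t work         = PTHREAD_MUTEX_INITIALIZER;

/* tty write path: holds the ldisc output lock, then takes the console lock */
static void *writer(void *arg)
{
	pthread_mutex_lock(&output_lock);
	pthread_mutex_lock(&console_lock);
	puts("writer: output_lock -> console_lock");
	pthread_mutex_unlock(&console_lock);
	pthread_mutex_unlock(&output_lock);
	return NULL;
}

/* vt ioctl path: holds the console lock, then waits for the work item */
static void *ioctl_path(void *arg)
{
	pthread_mutex_lock(&console_lock);
	pthread_mutex_lock(&work);	/* models flush_work()/cancel_work_sync() */
	puts("ioctl: console_lock -> work");
	pthread_mutex_unlock(&work);
	pthread_mutex_unlock(&console_lock);
	return NULL;
}

/* workqueue path: "holds" the running work item, then takes the output lock */
static void *worker(void *arg)
{
	pthread_mutex_lock(&work);
	pthread_mutex_lock(&output_lock);	/* models process_echoes() */
	puts("worker: work -> output_lock");
	pthread_mutex_unlock(&output_lock);
	pthread_mutex_unlock(&work);
	return NULL;
}

int main(void)
{
	pthread_t t[3];

	pthread_create(&t[0], NULL, writer, NULL);
	pthread_create(&t[1], NULL, ioctl_path, NULL);
	pthread_create(&t[2], NULL, worker, NULL);
	for (int i = 0; i < 3; i++)
		pthread_join(t[i], NULL);
	return 0;
}

[The three acquisition orders form the same output_lock -> console_lock ->
(&buf->work) -> output_lock cycle that lockdep prints; a given run may happen
to complete, but a lock-order checker such as helgrind or ThreadSanitizer will
flag the inversion just as lockdep does in the kernel.]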