From: Peter Zijlstra <peterz@infradead.org>
To: Dave Jones <davej@codemonkey.org.uk>,
	Linux Kernel <linux-kernel@vger.kernel.org>,
	Ingo Molnar <mingo@redhat.com>,
	Stephane Eranian <eranian@gmail.com>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>,
	Andi Kleen <andi@firstfloor.org>
Subject: Re: perf related lockdep bug
Date: Wed, 4 Nov 2015 11:21:51 +0100
Message-ID: <20151104102151.GG17308@twins.programming.kicks-ass.net>
In-Reply-To: <20151104051717.GA6098@codemonkey.org.uk>

On Wed, Nov 04, 2015 at 12:17:17AM -0500, Dave Jones wrote:
> This is new to me since today's merge window.
> 

Thanks, 'interesting' indeed. I've had one complaint from Andi about his
machine going belly up which might be explained by this.

The problem appears to be due to the new RCU expedited grace period
stuff, with rcu_read_unlock() now randomly trying to acquire locks it
previously didn't.
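
To illustrate, a rough sketch of the new edge (pseudo-code distilled
from the backtraces below; not actual kernel code):

	/*
	 * Fork path: ctx->lock is held across rcu_read_unlock().  If the
	 * reader was preempted and an expedited grace period is pending,
	 * the unlock has to report the quiescent state itself:
	 */
	perf_lock_task_context():
	    rcu_read_lock();
	    ctx = rcu_dereference(task->perf_event_ctxp[ctxn]);
	    raw_spin_lock(&ctx->lock);          /* ctx->lock held ...       */
	    rcu_read_unlock();                  /* ... and this ends up in: */
	        rcu_read_unlock_special();
	            rcu_report_exp_cpu_mult();  /* takes rcu_node_0         */

	/*
	 * That closes the cycle against the pre-existing chain
	 * rcu_node_0 --> &p->pi_lock --> &rq->lock --> &ctx->lock.
	 */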

Lemme go look at those rcu bits again...

---

> [  293.955261] ======================================================
> [  293.955395] [ INFO: possible circular locking dependency detected ]
> [  293.955536] 4.3.0-think+ #3 Not tainted
> [  293.955618] -------------------------------------------------------
> [  293.955756] trinity-c343/12586 is trying to acquire lock:
> [  293.955875]  (rcu_node_0){-.-...}, at: [<ffffffffb00eb734>] rcu_report_exp_cpu_mult+0x24/0x70
> [  293.956070] 
>                but task is already holding lock:
> [  293.956194]  (&ctx->lock){-.....}, at: [<ffffffffb0180453>] perf_lock_task_context+0x53/0x300
> [  293.956386] 
>                which lock already depends on the new lock.
> 
> [  293.956559] 
>                the existing dependency chain (in reverse order) is:
> [  293.956720] 
>                -> #3 (&ctx->lock){-.....}:
> [  293.956807]        [<ffffffffb00d53a3>] lock_acquire+0xc3/0x1d0
> [  293.956940]        [<ffffffffb07efdd1>] _raw_spin_lock+0x41/0x80
> [  293.957075]        [<ffffffffb0185b66>] __perf_event_task_sched_out+0x276/0x700
> [  293.957239]        [<ffffffffb07e9f70>] __schedule+0x530/0xaf0
> [  293.957370]        [<ffffffffb07ea6df>] schedule+0x3f/0x90
> [  293.957493]        [<ffffffffb0001206>] exit_to_usermode_loop+0x76/0xa0
> [  293.957642]        [<ffffffffb0001c13>] prepare_exit_to_usermode+0x53/0x60
> [  293.957795]        [<ffffffffb07f1867>] retint_user+0x8/0x20
> [  293.957923] 
>                -> #2 (&rq->lock){-.-.-.}:
> [  293.958008]        [<ffffffffb00d53a3>] lock_acquire+0xc3/0x1d0
> [  293.958140]        [<ffffffffb07efdd1>] _raw_spin_lock+0x41/0x80
> [  293.958275]        [<ffffffffb00b198d>] sched_move_task+0x4d/0x270
> [  293.958417]        [<ffffffffb00b1bde>] cpu_cgroup_fork+0xe/0x10
> [  293.958551]        [<ffffffffb012a057>] cgroup_post_fork+0x67/0x160
> [  293.958692]        [<ffffffffb007ae0a>] copy_process+0x147a/0x1e10
> [  293.958831]        [<ffffffffb007b927>] _do_fork+0x97/0x680
> [  293.958955]        [<ffffffffb007bf39>] kernel_thread+0x29/0x30
> [  293.959087]        [<ffffffffb07e8477>] rest_init+0x27/0x140
> [  293.959211]        [<ffffffffb0ef1f72>] start_kernel+0x450/0x45d
> [  293.959347]        [<ffffffffb0ef12fe>] x86_64_start_reservations+0x2a/0x2c
> [  293.959503]        [<ffffffffb0ef1468>] x86_64_start_kernel+0x168/0x176
> [  293.959652] 
>                -> #1 (&p->pi_lock){-.-.-.}:
> [  293.959742]        [<ffffffffb00d53a3>] lock_acquire+0xc3/0x1d0
> [  293.959876]        [<ffffffffb07f00ec>] _raw_spin_lock_irqsave+0x4c/0x90
> [  293.960026]        [<ffffffffb00ac070>] try_to_wake_up+0x30/0x5f0
> [  293.960165]        [<ffffffffb00ac653>] wake_up_process+0x23/0x40
> [  293.960300]        [<ffffffffb0f0b7a0>] rcu_spawn_gp_kthread+0xd5/0xfd
> [  293.960449]        [<ffffffffb00002d3>] do_one_initcall+0xb3/0x1f0
> [  293.960587]        [<ffffffffb0ef2080>] kernel_init_freeable+0x101/0x234
> [  293.960736]        [<ffffffffb07e859e>] kernel_init+0xe/0xe0
> [  293.960860]        [<ffffffffb07f10ff>] ret_from_fork+0x3f/0x70
> [  293.960993] 
>                -> #0 (rcu_node_0){-.-...}:
> [  293.961081]        [<ffffffffb00d4722>] __lock_acquire+0x19d2/0x1cf0
> [  293.961222]        [<ffffffffb00d53a3>] lock_acquire+0xc3/0x1d0
> [  293.961351]        [<ffffffffb07f00ec>] _raw_spin_lock_irqsave+0x4c/0x90
> [  293.961501]        [<ffffffffb00eb734>] rcu_report_exp_cpu_mult+0x24/0x70
> [  293.961652]        [<ffffffffb00f48e9>] rcu_read_unlock_special+0x109/0x5e0
> [  293.961808]        [<ffffffffb00e88f7>] __rcu_read_unlock+0x87/0x90
> [  293.961947]        [<ffffffffb018059d>] perf_lock_task_context+0x19d/0x300
> [  293.962101]        [<ffffffffb018b5bf>] perf_event_init_task+0x9f/0x370
> [  293.962248]        [<ffffffffb007a18a>] copy_process+0x7fa/0x1e10
> [  293.962385]        [<ffffffffb007b927>] _do_fork+0x97/0x680
> [  293.962508]        [<ffffffffb007bfb9>] SyS_clone+0x19/0x20
> [  293.962631]        [<ffffffffb07f0d97>] entry_SYSCALL_64_fastpath+0x12/0x6f
> [  293.962787] 
>                other info that might help us debug this:
> 
> [  293.962955] Chain exists of:
>                  rcu_node_0 --> &rq->lock --> &ctx->lock
> 
> [  293.963106]  Possible unsafe locking scenario:
> 
> [  293.963233]        CPU0                    CPU1
> [  293.963332]        ----                    ----
> [  293.963430]   lock(&ctx->lock);
> [  293.963500]                                lock(&rq->lock);
> [  293.963627]                                lock(&ctx->lock);
> [  293.963755]   lock(rcu_node_0);
> [  293.963824] 
>                 *** DEADLOCK ***
> 
> [  293.963947] 2 locks held by trinity-c343/12586:
> [  293.964047]  #0:  (rcu_read_lock){......}, at: [<ffffffffb01804d0>] perf_lock_task_context+0xd0/0x300
> [  293.964258]  #1:  (&ctx->lock){-.....}, at: [<ffffffffb0180453>] perf_lock_task_context+0x53/0x300
> [  293.972934] 
>                stack backtrace:
> [  293.990221] CPU: 3 PID: 12586 Comm: trinity-c343 Not tainted 4.3.0-think+ #3
> [  293.999210]  ffff8804fb6824a0 ffff88044a0efa60 ffffffffb03607bc ffffffffb1779630
> [  294.008115]  ffff88044a0efaa0 ffffffffb00d1453 ffff88044a0efb10 0000000000000002
> [  294.017066]  ffff8804fb682468 ffff8804fb6824a0 0000000000000001 ffff8804fb681b80
> [  294.026109] Call Trace:
> [  294.035079]  [<ffffffffb03607bc>] dump_stack+0x4e/0x82
> [  294.044014]  [<ffffffffb00d1453>] print_circular_bug+0x1e3/0x250
> [  294.052982]  [<ffffffffb00d4722>] __lock_acquire+0x19d2/0x1cf0
> [  294.061926]  [<ffffffffb00cedf9>] ? get_lock_stats+0x19/0x60
> [  294.070878]  [<ffffffffb00d53a3>] lock_acquire+0xc3/0x1d0
> [  294.079828]  [<ffffffffb00eb734>] ? rcu_report_exp_cpu_mult+0x24/0x70
> [  294.088805]  [<ffffffffb07f00ec>] _raw_spin_lock_irqsave+0x4c/0x90
> [  294.097771]  [<ffffffffb00eb734>] ? rcu_report_exp_cpu_mult+0x24/0x70
> [  294.106807]  [<ffffffffb00eb734>] rcu_report_exp_cpu_mult+0x24/0x70
> [  294.115823]  [<ffffffffb00f48e9>] rcu_read_unlock_special+0x109/0x5e0
> [  294.124813]  [<ffffffffb00cedf9>] ? get_lock_stats+0x19/0x60
> [  294.133801]  [<ffffffffb0180453>] ? perf_lock_task_context+0x53/0x300
> [  294.142790]  [<ffffffffb00e88f7>] __rcu_read_unlock+0x87/0x90
> [  294.151675]  [<ffffffffb018059d>] perf_lock_task_context+0x19d/0x300
> [  294.160420]  [<ffffffffb01804d0>] ? perf_lock_task_context+0xd0/0x300
> [  294.169166]  [<ffffffffb018b5bf>] perf_event_init_task+0x9f/0x370
> [  294.177832]  [<ffffffffb00a64a5>] ? preempt_count_sub+0xa5/0xf0
> [  294.186477]  [<ffffffffb007a18a>] copy_process+0x7fa/0x1e10
> [  294.195036]  [<ffffffffb00cedf9>] ? get_lock_stats+0x19/0x60
> [  294.203564]  [<ffffffffb00ced1d>] ? trace_hardirqs_off+0xd/0x10
> [  294.212042]  [<ffffffffb0146101>] ? __acct_update_integrals+0x91/0xf0
> [  294.220411]  [<ffffffffb00a64a5>] ? preempt_count_sub+0xa5/0xf0
> [  294.228778]  [<ffffffffb007b927>] _do_fork+0x97/0x680
> [  294.237038]  [<ffffffffb018fd73>] ? context_tracking_user_exit+0x13/0x20
> [  294.245322]  [<ffffffffb00017ca>] ? enter_from_user_mode+0x1a/0x50
> [  294.253586]  [<ffffffffb00018bc>] ? syscall_trace_enter_phase1+0xbc/0x110
> [  294.261919]  [<ffffffffb0001017>] ? trace_hardirqs_on_thunk+0x17/0x19
> [  294.270306]  [<ffffffffb007bfb9>] SyS_clone+0x19/0x20
> [  294.278672]  [<ffffffffb07f0d97>] entry_SYSCALL_64_fastpath+0x12/0x6f
> 
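
For the record: the fix that eventually landed (see the tip-bot entry in
the thread overview below, "perf: Disable IRQs across RCU RS CS that
acquires scheduler lock") disables interrupts across the entire RCU
read-side critical section, so the expedited-GP IPI cannot hit inside it
and rcu_read_unlock() never needs the rcu_node lock. A minimal sketch of
that approach (reconstructed from the commit title and the backtraces
above, not the verbatim upstream patch):

	static struct perf_event_context *
	perf_lock_task_context(struct task_struct *task, int ctxn,
			       unsigned long *flags)
	{
		struct perf_event_context *ctx;

		local_irq_save(*flags);    /* IRQs off before the read side */
		rcu_read_lock();
		ctx = rcu_dereference(task->perf_event_ctxp[ctxn]);
		if (ctx)
			raw_spin_lock(&ctx->lock);  /* no longer _irqsave() */
		rcu_read_unlock();         /* IRQs still off: the unlock
					    * never has to acquire the
					    * rcu_node lock here          */
		if (!ctx)
			local_irq_restore(*flags);
		return ctx;                /* caller drops ctx->lock and
					    * restores IRQs via *flags    */
	}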


Thread overview: 19+ messages
2015-11-04  5:17 perf related lockdep bug Dave Jones
2015-11-04 10:21 ` Peter Zijlstra [this message]
2015-11-04 10:28   ` Peter Zijlstra
2015-11-04 10:50     ` Peter Zijlstra
2015-11-04 13:48       ` Paul E. McKenney
2015-11-04 14:20         ` Peter Zijlstra
2015-11-04 15:34           ` Paul E. McKenney
2015-11-04 15:36           ` Peter Zijlstra
2015-11-04 15:51             ` Paul E. McKenney
2015-11-04 20:58         ` Andi Kleen
2015-11-05  0:55           ` Paul E. McKenney
2015-11-05  1:59             ` Paul E. McKenney
2015-11-05  2:46             ` Andi Kleen
2015-11-05 14:04               ` Paul E. McKenney
2015-11-11 13:29                 ` Paul E. McKenney
2015-11-10  6:39         ` [tip:perf/urgent] perf: Disable IRQs across RCU RS CS that acquires scheduler lock tip-bot for Paul E. McKenney
2015-11-04 14:01     ` perf related lockdep bug Paul E. McKenney
2015-11-04 14:34       ` Peter Zijlstra
2015-11-05  1:57         ` Paul E. McKenney
