From: David Ahern <dsahern@gmail.com>
To: Eric Dumazet <eric.dumazet@gmail.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Kees Cook <keescook@chromium.org>
Subject: circular locking dependency splat
Date: Tue, 7 Nov 2017 09:35:51 +0900	[thread overview]
Message-ID: <0187c7cb-96c2-752b-4a17-28bbc0ac08d0@gmail.com> (raw)

Saw this while trying to reproduce the RA splat. This is from the top of
tree net-next:
commit 2798b80b385384d51a81832556ee9ad25d175f9b

[   99.988789] ======================================================
[   99.990494] WARNING: possible circular locking dependency detected
[   99.992187] 4.14.0-rc7+ #23 Not tainted
[   99.993242] ------------------------------------------------------
[   99.994937] swapper/1/0 is trying to acquire lock:
[   99.996272]  (slock-AF_INET){+.-.}, at: [<ffffffff81729b14>] tcp_delack_timer+0x29/0xb1
[   99.998375]
but task is already holding lock:
[   99.999149]  ((timer)){+.-.}, at: [<ffffffff810f3c0b>] call_timer_fn+0x5/0x36b
[  100.000105]
which lock already depends on the new lock.

[  100.001184]
the existing dependency chain (in reverse order) is:
[  100.002163]
-> #1 ((timer)){+.-.}:
[  100.002830]        lock_acquire+0x154/0x220
[  100.003392]        del_timer_sync+0x47/0xbd
[  100.003952]        inet_csk_reqsk_queue_drop+0x109/0x141
[  100.004654]        inet_csk_complete_hashdance+0x3b/0x68
[  100.005357]        tcp_check_req+0x517/0x5f1
[  100.005927]        tcp_v4_rcv+0x6ad/0xce7
[  100.006472]        ip_local_deliver_finish+0x1d4/0x281
[  100.007153]        ip_local_deliver+0xaf/0xcf
[  100.007736]        ip_rcv_finish+0x632/0x6ff
[  100.008311]        ip_rcv+0x45d/0x4a6
[  100.008804]        __netif_receive_skb_core+0x588/0x9b0
[  100.009514]        __netif_receive_skb+0x32/0xc5
[  100.010130]        netif_receive_skb_internal+0x66/0x14d
[  100.010839]        napi_gro_receive+0x148/0x205
[  100.011454]        receive_buf+0xfa5/0xfb7
[  100.012001]        virtnet_poll+0x1f0/0x2e4
[  100.012559]        net_rx_action+0x178/0x3c6
[  100.013135]        __do_softirq+0x215/0x555
[  100.013690]        irq_exit+0x71/0xd6
[  100.014182]        do_IRQ+0xe2/0xfa
[  100.014660]        ret_from_intr+0x0/0x1e
[  100.015195]        native_safe_halt+0x6/0x8
[  100.015749]        default_idle+0x24/0x37
[  100.016291]        arch_cpu_idle+0xf/0x11
[  100.016828]        default_idle_call+0x28/0x2f
[  100.017418]        do_idle+0xf5/0x190
[  100.017911]        cpu_startup_entry+0x1f/0x21
[  100.018504]        start_secondary+0x186/0x18c
[  100.019102]        verify_cpu+0x0/0xf1
[  100.019617]
-> #0 (slock-AF_INET){+.-.}:
[  100.020342]        __lock_acquire+0x1168/0x14c5
[  100.020951]        lock_acquire+0x154/0x220
[  100.021512]        _raw_spin_lock+0x34/0x6a
[  100.022078]        tcp_delack_timer+0x29/0xb1
[  100.022672]        call_timer_fn+0x139/0x36b
[  100.023250]        __run_timers.part.35+0x2c2/0x2f7
[  100.023899]        run_timer_softirq+0x52/0x9f
[  100.024493]        __do_softirq+0x215/0x555
[  100.025048]        irq_exit+0x71/0xd6
[  100.025546]        smp_apic_timer_interrupt+0x235/0x32a
[  100.026246]        apic_timer_interrupt+0x9d/0xb0
[  100.026875]        native_safe_halt+0x6/0x8
[  100.027435]        default_idle+0x24/0x37
[  100.027974]        arch_cpu_idle+0xf/0x11
[  100.028512]        default_idle_call+0x28/0x2f
[  100.029109]        do_idle+0xf5/0x190
[  100.029598]        cpu_startup_entry+0x1f/0x21
[  100.030198]        start_secondary+0x186/0x18c
[  100.030790]        verify_cpu+0x0/0xf1
[  100.031290]
other info that might help us debug this:

[  100.032339]  Possible unsafe locking scenario:

[  100.033125]        CPU0                    CPU1
[  100.033722]        ----                    ----
[  100.034335]   lock((timer));
[  100.034728]                                lock(slock-AF_INET);
[  100.035513]                                lock((timer));
[  100.036233]   lock(slock-AF_INET);
[  100.036695]
 *** DEADLOCK ***

[  100.037487] 1 lock held by swapper/1/0:
[  100.037996]  #0:  ((timer)){+.-.}, at: [<ffffffff810f3c0b>] call_timer_fn+0x5/0x36b
[  100.039008]
stack backtrace:
[  100.039586] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.14.0-rc7+ #23
[  100.040434] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[  100.041757] Call Trace:
[  100.042090]  <IRQ>
[  100.042381]  dump_stack+0x81/0xb6
[  100.042831]  print_circular_bug.isra.39+0x22e/0x23c
[  100.043483]  __lock_acquire+0x1168/0x14c5
[  100.044023]  ? mark_lock+0x3a/0x265
[  100.044504]  lock_acquire+0x154/0x220
[  100.045003]  ? lock_acquire+0x154/0x220
[  100.045525]  ? tcp_delack_timer+0x29/0xb1
[  100.046061]  ? tcp_delack_timer_handler+0x1c0/0x1c0
[  100.046719]  _raw_spin_lock+0x34/0x6a
[  100.047211]  ? tcp_delack_timer+0x29/0xb1
[  100.047751]  tcp_delack_timer+0x29/0xb1
[  100.048274]  ? tcp_delack_timer_handler+0x1c0/0x1c0
[  100.048921]  call_timer_fn+0x139/0x36b
[  100.049436]  __run_timers.part.35+0x2c2/0x2f7
[  100.050023]  ? tcp_delack_timer_handler+0x1c0/0x1c0
[  100.050678]  ? kvm_clock_read+0x25/0x2e
[  100.051194]  ? kvm_clock_get_cycles+0x9/0xb
[  100.051753]  ? lock_is_held_type+0x103/0x112
[  100.052328]  run_timer_softirq+0x52/0x9f
[  100.052853]  __do_softirq+0x215/0x555
[  100.053347]  irq_exit+0x71/0xd6
[  100.053794]  smp_apic_timer_interrupt+0x235/0x32a
[  100.054426]  apic_timer_interrupt+0x9d/0xb0
[  100.054993]  </IRQ>
[  100.055287] RIP: 0010:native_safe_halt+0x6/0x8
[  100.055880] RSP: 0018:ffff880035fafea0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff10
[  100.056872] RAX: 1ffff10006bf3d00 RBX: 0000000000000000 RCX: ffffffff810cae10
[  100.057810] RDX: 1ffffffff03e3e31 RSI: 0000000000000007 RDI: ffff880035f9eafc
[  100.058757] RBP: ffff880035fafea0 R08: dffffc0000000000 R09: 0000000000000001
[  100.059685] R10: fffffbfff03e4188 R11: ffffffff81f20c3b R12: 0000000000000000
[  100.060619] R13: ffff880035f9e300 R14: ffff880035f9e300 R15: ffff880035f9e300
[  100.061556]  ? trace_hardirqs_on_caller+0x22e/0x274
[  100.062216]  default_idle+0x24/0x37
[  100.062687]  arch_cpu_idle+0xf/0x11
[  100.063159]  default_idle_call+0x28/0x2f
[  100.063691]  do_idle+0xf5/0x190
[  100.064120]  cpu_startup_entry+0x1f/0x21
[  100.064643]  start_secondary+0x186/0x18c
[  100.065168]  secondary_startup_64+0xa5/0xa5
[  115.194378] nr_pdflush_threads exported in /proc is scheduled for removal

Thread overview:
2017-11-07  0:35 David Ahern [this message]
2017-11-07  1:15 ` circular locking dependency splat Eric Dumazet
