linux-kernel.vger.kernel.org archive mirror
* possible deadlock in sch_direct_xmit (2)
@ 2020-04-29  0:59 syzbot
  2021-10-29  2:08 ` [syzbot] " syzbot
                   ` (4 more replies)
  0 siblings, 5 replies; 6+ messages in thread
From: syzbot @ 2020-04-29  0:59 UTC (permalink / raw)
  To: davem, jhs, jiri, kuba, linux-kernel, netdev, syzkaller-bugs,
	xiyou.wangcong

Hello,

syzbot found the following crash on:

HEAD commit:    3f2eaebb bpf, riscv: Fix tail call count off by one in RV3..
git tree:       bpf-next
console output: https://syzkaller.appspot.com/x/log.txt?x=120d1808100000
kernel config:  https://syzkaller.appspot.com/x/.config?x=3b755b963c64ac09
dashboard link: https://syzkaller.appspot.com/bug?extid=e18ac85757292b7baf96
compiler:       gcc (GCC) 9.0.0 20181231 (experimental)

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+e18ac85757292b7baf96@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
5.7.0-rc1-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.4/13161 is trying to acquire lock:
ffff8880978ed498 (&dev->qdisc_xmit_lock_key#292){+.-.}-{2:2}, at: spin_lock include/linux/spinlock.h:353 [inline]
ffff8880978ed498 (&dev->qdisc_xmit_lock_key#292){+.-.}-{2:2}, at: __netif_tx_lock include/linux/netdevice.h:4055 [inline]
ffff8880978ed498 (&dev->qdisc_xmit_lock_key#292){+.-.}-{2:2}, at: sch_direct_xmit+0x2be/0xc20 net/sched/sch_generic.c:311

but task is already holding lock:
ffff888099bcc898 (&dev->qdisc_xmit_lock_key#303){+.-.}-{2:2}, at: spin_lock include/linux/spinlock.h:353 [inline]
ffff888099bcc898 (&dev->qdisc_xmit_lock_key#303){+.-.}-{2:2}, at: __netif_tx_lock include/linux/netdevice.h:4055 [inline]
ffff888099bcc898 (&dev->qdisc_xmit_lock_key#303){+.-.}-{2:2}, at: __dev_queue_xmit+0x26ba/0x30a0 net/core/dev.c:4048

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&dev->qdisc_xmit_lock_key#303){+.-.}-{2:2}:
       __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
       _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
       spin_lock include/linux/spinlock.h:353 [inline]
       __netif_tx_lock include/linux/netdevice.h:4055 [inline]
       __dev_queue_xmit+0x26ba/0x30a0 net/core/dev.c:4048
       neigh_output include/net/neighbour.h:510 [inline]
       ip6_finish_output2+0x1091/0x25b0 net/ipv6/ip6_output.c:117
       __ip6_finish_output+0x442/0xab0 net/ipv6/ip6_output.c:143
       ip6_finish_output+0x34/0x1f0 net/ipv6/ip6_output.c:153
       NF_HOOK_COND include/linux/netfilter.h:296 [inline]
       ip6_output+0x239/0x810 net/ipv6/ip6_output.c:176
       dst_output include/net/dst.h:435 [inline]
       ip6_local_out+0xaf/0x1a0 net/ipv6/output_core.c:179
       ip6_send_skb+0xb4/0x340 net/ipv6/ip6_output.c:1865
       ip6_push_pending_frames+0xbd/0xe0 net/ipv6/ip6_output.c:1885
       icmpv6_push_pending_frames+0x33a/0x530 net/ipv6/icmp.c:304
       icmp6_send+0x1b0b/0x23b0 net/ipv6/icmp.c:617
       icmpv6_send+0xde/0x210 net/ipv6/ip6_icmp.c:43
       ip6_link_failure+0x26/0x520 net/ipv6/route.c:2640
       dst_link_failure include/net/dst.h:418 [inline]
       ip_tunnel_xmit+0x15fc/0x2a65 net/ipv4/ip_tunnel.c:820
       erspan_xmit+0x90d/0x2910 net/ipv4/ip_gre.c:683
       __netdev_start_xmit include/linux/netdevice.h:4574 [inline]
       netdev_start_xmit include/linux/netdevice.h:4588 [inline]
       xmit_one net/core/dev.c:3477 [inline]
       dev_hard_start_xmit+0x1a4/0x9b0 net/core/dev.c:3493
       sch_direct_xmit+0x345/0xc20 net/sched/sch_generic.c:313
       qdisc_restart net/sched/sch_generic.c:376 [inline]
       __qdisc_run+0x4d1/0x17b0 net/sched/sch_generic.c:384
       qdisc_run include/net/pkt_sched.h:134 [inline]
       qdisc_run include/net/pkt_sched.h:126 [inline]
       __dev_xmit_skb net/core/dev.c:3668 [inline]
       __dev_queue_xmit+0x2115/0x30a0 net/core/dev.c:4021
       neigh_resolve_output net/core/neighbour.c:1489 [inline]
       neigh_resolve_output+0x566/0x930 net/core/neighbour.c:1469
       neigh_output include/net/neighbour.h:510 [inline]
       ip6_finish_output2+0x1091/0x25b0 net/ipv6/ip6_output.c:117
       __ip6_finish_output+0x442/0xab0 net/ipv6/ip6_output.c:143
       ip6_finish_output+0x34/0x1f0 net/ipv6/ip6_output.c:153
       NF_HOOK_COND include/linux/netfilter.h:296 [inline]
       ip6_output+0x239/0x810 net/ipv6/ip6_output.c:176
       dst_output include/net/dst.h:435 [inline]
       NF_HOOK include/linux/netfilter.h:307 [inline]
       rawv6_send_hdrinc net/ipv6/raw.c:687 [inline]
       rawv6_sendmsg+0x20f6/0x3900 net/ipv6/raw.c:944
       inet_sendmsg+0x99/0xe0 net/ipv4/af_inet.c:807
       sock_sendmsg_nosec net/socket.c:652 [inline]
       sock_sendmsg+0xcf/0x120 net/socket.c:672
       ____sys_sendmsg+0x6bf/0x7e0 net/socket.c:2362
       ___sys_sendmsg+0x100/0x170 net/socket.c:2416
       __sys_sendmsg+0xec/0x1b0 net/socket.c:2449
       do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
       entry_SYSCALL_64_after_hwframe+0x49/0xb3

-> #0 (&dev->qdisc_xmit_lock_key#292){+.-.}-{2:2}:
       check_prev_add kernel/locking/lockdep.c:2515 [inline]
       check_prevs_add kernel/locking/lockdep.c:2620 [inline]
       validate_chain kernel/locking/lockdep.c:3237 [inline]
       __lock_acquire+0x2ab1/0x4c50 kernel/locking/lockdep.c:4355
       lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4934
       __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
       _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
       spin_lock include/linux/spinlock.h:353 [inline]
       __netif_tx_lock include/linux/netdevice.h:4055 [inline]
       sch_direct_xmit+0x2be/0xc20 net/sched/sch_generic.c:311
       qdisc_restart net/sched/sch_generic.c:376 [inline]
       __qdisc_run+0x4d1/0x17b0 net/sched/sch_generic.c:384
       qdisc_run include/net/pkt_sched.h:134 [inline]
       qdisc_run include/net/pkt_sched.h:126 [inline]
       __dev_xmit_skb net/core/dev.c:3668 [inline]
       __dev_queue_xmit+0x2115/0x30a0 net/core/dev.c:4021
       neigh_resolve_output net/core/neighbour.c:1489 [inline]
       neigh_resolve_output+0x566/0x930 net/core/neighbour.c:1469
       neigh_output include/net/neighbour.h:510 [inline]
       ip6_finish_output2+0x1091/0x25b0 net/ipv6/ip6_output.c:117
       __ip6_finish_output+0x442/0xab0 net/ipv6/ip6_output.c:143
       ip6_finish_output+0x34/0x1f0 net/ipv6/ip6_output.c:153
       NF_HOOK_COND include/linux/netfilter.h:296 [inline]
       ip6_output+0x239/0x810 net/ipv6/ip6_output.c:176
       dst_output include/net/dst.h:435 [inline]
       NF_HOOK include/linux/netfilter.h:307 [inline]
       ndisc_send_skb+0xf40/0x14b0 net/ipv6/ndisc.c:506
       ndisc_send_ns+0x3b0/0x860 net/ipv6/ndisc.c:648
       ndisc_solicit+0x2ed/0x470 net/ipv6/ndisc.c:740
       neigh_probe+0xcc/0x110 net/core/neighbour.c:1009
       __neigh_event_send+0x3d4/0x16d0 net/core/neighbour.c:1170
       neigh_event_send include/net/neighbour.h:444 [inline]
       neigh_resolve_output+0x590/0x930 net/core/neighbour.c:1473
       neigh_output include/net/neighbour.h:510 [inline]
       ip6_finish_output2+0x1091/0x25b0 net/ipv6/ip6_output.c:117
       __ip6_finish_output+0x442/0xab0 net/ipv6/ip6_output.c:143
       ip6_finish_output+0x34/0x1f0 net/ipv6/ip6_output.c:153
       NF_HOOK_COND include/linux/netfilter.h:296 [inline]
       ip6_output+0x239/0x810 net/ipv6/ip6_output.c:176
       dst_output include/net/dst.h:435 [inline]
       ip6_local_out+0xaf/0x1a0 net/ipv6/output_core.c:179
       ip6_send_skb+0xb4/0x340 net/ipv6/ip6_output.c:1865
       ip6_push_pending_frames+0xbd/0xe0 net/ipv6/ip6_output.c:1885
       icmpv6_push_pending_frames+0x33a/0x530 net/ipv6/icmp.c:304
       icmp6_send+0x1b0b/0x23b0 net/ipv6/icmp.c:617
       icmpv6_send+0xde/0x210 net/ipv6/ip6_icmp.c:43
       ip6_link_failure+0x26/0x520 net/ipv6/route.c:2640
       dst_link_failure include/net/dst.h:418 [inline]
       vti6_xmit net/ipv6/ip6_vti.c:537 [inline]
       vti6_tnl_xmit+0xfd4/0x1d30 net/ipv6/ip6_vti.c:576
       __netdev_start_xmit include/linux/netdevice.h:4574 [inline]
       netdev_start_xmit include/linux/netdevice.h:4588 [inline]
       xmit_one net/core/dev.c:3477 [inline]
       dev_hard_start_xmit+0x1a4/0x9b0 net/core/dev.c:3493
       __dev_queue_xmit+0x25e1/0x30a0 net/core/dev.c:4052
       packet_snd net/packet/af_packet.c:2979 [inline]
       packet_sendmsg+0x23cc/0x5ce0 net/packet/af_packet.c:3004
       sock_sendmsg_nosec net/socket.c:652 [inline]
       sock_sendmsg+0xcf/0x120 net/socket.c:672
       ____sys_sendmsg+0x6bf/0x7e0 net/socket.c:2362
       ___sys_sendmsg+0x100/0x170 net/socket.c:2416
       __sys_sendmsg+0xec/0x1b0 net/socket.c:2449
       do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
       entry_SYSCALL_64_after_hwframe+0x49/0xb3

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&dev->qdisc_xmit_lock_key#303);
                               lock(&dev->qdisc_xmit_lock_key#292);
                               lock(&dev->qdisc_xmit_lock_key#303);
  lock(&dev->qdisc_xmit_lock_key#292);

 *** DEADLOCK ***

11 locks held by syz-executor.4/13161:
 #0: ffffffff899beca0 (rcu_read_lock_bh){....}-{1:2}, at: __dev_queue_xmit+0x214/0x30a0 net/core/dev.c:3987
 #1: ffff888099bcc898 (&dev->qdisc_xmit_lock_key#303){+.-.}-{2:2}, at: spin_lock include/linux/spinlock.h:353 [inline]
 #1: ffff888099bcc898 (&dev->qdisc_xmit_lock_key#303){+.-.}-{2:2}, at: __netif_tx_lock include/linux/netdevice.h:4055 [inline]
 #1: ffff888099bcc898 (&dev->qdisc_xmit_lock_key#303){+.-.}-{2:2}, at: __dev_queue_xmit+0x26ba/0x30a0 net/core/dev.c:4048
 #2: ffffffff899bed00 (rcu_read_lock){....}-{1:2}, at: icmpv6_send+0x0/0x210 net/ipv6/ip6_icmp.c:31
 #3: ffff888087823260 (k-slock-AF_INET6){+.-.}-{2:2}, at: spin_trylock include/linux/spinlock.h:363 [inline]
 #3: ffff888087823260 (k-slock-AF_INET6){+.-.}-{2:2}, at: icmpv6_xmit_lock net/ipv6/icmp.c:117 [inline]
 #3: ffff888087823260 (k-slock-AF_INET6){+.-.}-{2:2}, at: icmp6_send+0xde8/0x23b0 net/ipv6/icmp.c:538
 #4: ffffffff899bed00 (rcu_read_lock){....}-{1:2}, at: icmp6_send+0x13cd/0x23b0 net/ipv6/icmp.c:598
 #5: ffffffff899beca0 (rcu_read_lock_bh){....}-{1:2}, at: lwtunnel_xmit_redirect include/net/lwtunnel.h:92 [inline]
 #5: ffffffff899beca0 (rcu_read_lock_bh){....}-{1:2}, at: ip6_finish_output2+0x215/0x25b0 net/ipv6/ip6_output.c:103
 #6: ffffffff899bed00 (rcu_read_lock){....}-{1:2}, at: ip6_nd_hdr net/ipv6/ndisc.c:464 [inline]
 #6: ffffffff899bed00 (rcu_read_lock){....}-{1:2}, at: ndisc_send_skb+0x80a/0x14b0 net/ipv6/ndisc.c:500
 #7: ffffffff899beca0 (rcu_read_lock_bh){....}-{1:2}, at: lwtunnel_xmit_redirect include/net/lwtunnel.h:92 [inline]
 #7: ffffffff899beca0 (rcu_read_lock_bh){....}-{1:2}, at: ip6_finish_output2+0x215/0x25b0 net/ipv6/ip6_output.c:103
 #8: ffffffff899beca0 (rcu_read_lock_bh){....}-{1:2}, at: __dev_queue_xmit+0x214/0x30a0 net/core/dev.c:3987
 #9: ffff8880a208d258 (&dev->qdisc_tx_busylock_key#45){+...}-{2:2}, at: spin_trylock include/linux/spinlock.h:363 [inline]
 #9: ffff8880a208d258 (&dev->qdisc_tx_busylock_key#45){+...}-{2:2}, at: qdisc_run_begin include/net/sch_generic.h:159 [inline]
 #9: ffff8880a208d258 (&dev->qdisc_tx_busylock_key#45){+...}-{2:2}, at: qdisc_run include/net/pkt_sched.h:128 [inline]
 #9: ffff8880a208d258 (&dev->qdisc_tx_busylock_key#45){+...}-{2:2}, at: __dev_xmit_skb net/core/dev.c:3668 [inline]
 #9: ffff8880a208d258 (&dev->qdisc_tx_busylock_key#45){+...}-{2:2}, at: __dev_queue_xmit+0x27d6/0x30a0 net/core/dev.c:4021
 #10: ffff8880a208d148 (&dev->qdisc_running_key#168){+...}-{0:0}, at: neigh_resolve_output net/core/neighbour.c:1489 [inline]
 #10: ffff8880a208d148 (&dev->qdisc_running_key#168){+...}-{0:0}, at: neigh_resolve_output+0x566/0x930 net/core/neighbour.c:1469

stack backtrace:
CPU: 1 PID: 13161 Comm: syz-executor.4 Not tainted 5.7.0-rc1-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x188/0x20d lib/dump_stack.c:118
 check_noncircular+0x32e/0x3e0 kernel/locking/lockdep.c:1846
 check_prev_add kernel/locking/lockdep.c:2515 [inline]
 check_prevs_add kernel/locking/lockdep.c:2620 [inline]
 validate_chain kernel/locking/lockdep.c:3237 [inline]
 __lock_acquire+0x2ab1/0x4c50 kernel/locking/lockdep.c:4355
 lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4934
 __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
 _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
 spin_lock include/linux/spinlock.h:353 [inline]
 __netif_tx_lock include/linux/netdevice.h:4055 [inline]
 sch_direct_xmit+0x2be/0xc20 net/sched/sch_generic.c:311
 qdisc_restart net/sched/sch_generic.c:376 [inline]
 __qdisc_run+0x4d1/0x17b0 net/sched/sch_generic.c:384
 qdisc_run include/net/pkt_sched.h:134 [inline]
 qdisc_run include/net/pkt_sched.h:126 [inline]
 __dev_xmit_skb net/core/dev.c:3668 [inline]
 __dev_queue_xmit+0x2115/0x30a0 net/core/dev.c:4021
 neigh_resolve_output net/core/neighbour.c:1489 [inline]
 neigh_resolve_output+0x566/0x930 net/core/neighbour.c:1469
 neigh_output include/net/neighbour.h:510 [inline]
 ip6_finish_output2+0x1091/0x25b0 net/ipv6/ip6_output.c:117
 __ip6_finish_output+0x442/0xab0 net/ipv6/ip6_output.c:143
 ip6_finish_output+0x34/0x1f0 net/ipv6/ip6_output.c:153
 NF_HOOK_COND include/linux/netfilter.h:296 [inline]
 ip6_output+0x239/0x810 net/ipv6/ip6_output.c:176
 dst_output include/net/dst.h:435 [inline]
 NF_HOOK include/linux/netfilter.h:307 [inline]
 ndisc_send_skb+0xf40/0x14b0 net/ipv6/ndisc.c:506
 ndisc_send_ns+0x3b0/0x860 net/ipv6/ndisc.c:648
 ndisc_solicit+0x2ed/0x470 net/ipv6/ndisc.c:740
 neigh_probe+0xcc/0x110 net/core/neighbour.c:1009
 __neigh_event_send+0x3d4/0x16d0 net/core/neighbour.c:1170
 neigh_event_send include/net/neighbour.h:444 [inline]
 neigh_resolve_output+0x590/0x930 net/core/neighbour.c:1473
 neigh_output include/net/neighbour.h:510 [inline]
 ip6_finish_output2+0x1091/0x25b0 net/ipv6/ip6_output.c:117
 __ip6_finish_output+0x442/0xab0 net/ipv6/ip6_output.c:143
 ip6_finish_output+0x34/0x1f0 net/ipv6/ip6_output.c:153
 NF_HOOK_COND include/linux/netfilter.h:296 [inline]
 ip6_output+0x239/0x810 net/ipv6/ip6_output.c:176
 dst_output include/net/dst.h:435 [inline]
 ip6_local_out+0xaf/0x1a0 net/ipv6/output_core.c:179
 ip6_send_skb+0xb4/0x340 net/ipv6/ip6_output.c:1865
 ip6_push_pending_frames+0xbd/0xe0 net/ipv6/ip6_output.c:1885
 icmpv6_push_pending_frames+0x33a/0x530 net/ipv6/icmp.c:304
 icmp6_send+0x1b0b/0x23b0 net/ipv6/icmp.c:617
 icmpv6_send+0xde/0x210 net/ipv6/ip6_icmp.c:43
 ip6_link_failure+0x26/0x520 net/ipv6/route.c:2640
 dst_link_failure include/net/dst.h:418 [inline]
 vti6_xmit net/ipv6/ip6_vti.c:537 [inline]
 vti6_tnl_xmit+0xfd4/0x1d30 net/ipv6/ip6_vti.c:576
 __netdev_start_xmit include/linux/netdevice.h:4574 [inline]
 netdev_start_xmit include/linux/netdevice.h:4588 [inline]
 xmit_one net/core/dev.c:3477 [inline]
 dev_hard_start_xmit+0x1a4/0x9b0 net/core/dev.c:3493
 __dev_queue_xmit+0x25e1/0x30a0 net/core/dev.c:4052
 packet_snd net/packet/af_packet.c:2979 [inline]
 packet_sendmsg+0x23cc/0x5ce0 net/packet/af_packet.c:3004
 sock_sendmsg_nosec net/socket.c:652 [inline]
 sock_sendmsg+0xcf/0x120 net/socket.c:672
 ____sys_sendmsg+0x6bf/0x7e0 net/socket.c:2362
 ___sys_sendmsg+0x100/0x170 net/socket.c:2416
 __sys_sendmsg+0xec/0x1b0 net/socket.c:2449
 do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
 entry_SYSCALL_64_after_hwframe+0x49/0xb3
RIP: 0033:0x45c829
Code: 0d b7 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 db b6 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007ff9339b0c78 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 0000000000500880 RCX: 000000000045c829
RDX: 0000000000000000 RSI: 0000000020000100 RDI: 0000000000000004
RBP: 000000000078bf00 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000ffffffff
R13: 00000000000009f7 R14: 00000000004ccae4 R15: 00007ff9339b16d4


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.


* Re: [syzbot] possible deadlock in sch_direct_xmit (2)
  2020-04-29  0:59 possible deadlock in sch_direct_xmit (2) syzbot
@ 2021-10-29  2:08 ` syzbot
  2023-11-24  0:38 ` [syzbot] [net?] " syzbot
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: syzbot @ 2021-10-29  2:08 UTC (permalink / raw)
  To: administracion, davem, hdanton, jhs, jiri, kuba, linux-kernel,
	netdev, syzkaller-bugs, xiyou.wangcong

syzbot has found a reproducer for the following issue on:

HEAD commit:    35392da51b1a Revert "net: hns3: fix pause config problem a..
git tree:       net
console output: https://syzkaller.appspot.com/x/log.txt?x=108cede2b00000
kernel config:  https://syzkaller.appspot.com/x/.config?x=ca74db36f5f0f1c4
dashboard link: https://syzkaller.appspot.com/bug?extid=e18ac85757292b7baf96
compiler:       gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=15d2f204b00000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=112f3f6cb00000

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+e18ac85757292b7baf96@syzkaller.appspotmail.com

============================================
WARNING: possible recursive locking detected
5.15.0-rc6-syzkaller #0 Not tainted
--------------------------------------------
syz-executor023/6539 is trying to acquire lock:
ffff88801c693398 (_xmit_ETHER#2){+.-.}-{2:2}, at: spin_lock include/linux/spinlock.h:363 [inline]
ffff88801c693398 (_xmit_ETHER#2){+.-.}-{2:2}, at: __netif_tx_lock include/linux/netdevice.h:4405 [inline]
ffff88801c693398 (_xmit_ETHER#2){+.-.}-{2:2}, at: sch_direct_xmit+0x30f/0xbc0 net/sched/sch_generic.c:340

but task is already holding lock:
ffff88801d04fc98 (_xmit_ETHER#2){+.-.}-{2:2}, at: spin_lock include/linux/spinlock.h:363 [inline]
ffff88801d04fc98 (_xmit_ETHER#2){+.-.}-{2:2}, at: __netif_tx_lock include/linux/netdevice.h:4405 [inline]
ffff88801d04fc98 (_xmit_ETHER#2){+.-.}-{2:2}, at: sch_direct_xmit+0x30f/0xbc0 net/sched/sch_generic.c:340

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(_xmit_ETHER#2);
  lock(_xmit_ETHER#2);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

7 locks held by syz-executor023/6539:
 #0: ffffffff8b981ac0 (rcu_read_lock_bh){....}-{1:2}, at: lwtunnel_xmit_redirect include/net/lwtunnel.h:95 [inline]
 #0: ffffffff8b981ac0 (rcu_read_lock_bh){....}-{1:2}, at: ip_finish_output2+0x28b/0x2140 net/ipv4/ip_output.c:207
 #1: ffffffff8b981ac0 (rcu_read_lock_bh){....}-{1:2}, at: __dev_queue_xmit+0x1d5/0x36e0 net/core/dev.c:4143
 #2: ffff88801a4f5258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: spin_trylock include/linux/spinlock.h:373 [inline]
 #2: ffff88801a4f5258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: qdisc_run_begin include/net/sch_generic.h:173 [inline]
 #2: ffff88801a4f5258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: __dev_xmit_skb net/core/dev.c:3796 [inline]
 #2: ffff88801a4f5258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: __dev_queue_xmit+0x1222/0x36e0 net/core/dev.c:4177
 #3: ffff88801d04fc98 (_xmit_ETHER#2){+.-.}-{2:2}, at: spin_lock include/linux/spinlock.h:363 [inline]
 #3: ffff88801d04fc98 (_xmit_ETHER#2){+.-.}-{2:2}, at: __netif_tx_lock include/linux/netdevice.h:4405 [inline]
 #3: ffff88801d04fc98 (_xmit_ETHER#2){+.-.}-{2:2}, at: sch_direct_xmit+0x30f/0xbc0 net/sched/sch_generic.c:340
 #4: ffffffff8b981ac0 (rcu_read_lock_bh){....}-{1:2}, at: lwtunnel_xmit_redirect include/net/lwtunnel.h:95 [inline]
 #4: ffffffff8b981ac0 (rcu_read_lock_bh){....}-{1:2}, at: ip_finish_output2+0x28b/0x2140 net/ipv4/ip_output.c:207
 #5: ffffffff8b981ac0 (rcu_read_lock_bh){....}-{1:2}, at: __dev_queue_xmit+0x1d5/0x36e0 net/core/dev.c:4143
 #6: ffff88807762e258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: spin_trylock include/linux/spinlock.h:373 [inline]
 #6: ffff88807762e258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: qdisc_run_begin include/net/sch_generic.h:173 [inline]
 #6: ffff88807762e258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: __dev_xmit_skb net/core/dev.c:3796 [inline]
 #6: ffff88807762e258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: __dev_queue_xmit+0x1222/0x36e0 net/core/dev.c:4177

stack backtrace:
CPU: 0 PID: 6539 Comm: syz-executor023 Not tainted 5.15.0-rc6-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
 print_deadlock_bug kernel/locking/lockdep.c:2944 [inline]
 check_deadlock kernel/locking/lockdep.c:2987 [inline]
 validate_chain kernel/locking/lockdep.c:3776 [inline]
 __lock_acquire.cold+0x149/0x3ab kernel/locking/lockdep.c:5015
 lock_acquire kernel/locking/lockdep.c:5625 [inline]
 lock_acquire+0x1ab/0x510 kernel/locking/lockdep.c:5590
 __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
 _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
 spin_lock include/linux/spinlock.h:363 [inline]
 __netif_tx_lock include/linux/netdevice.h:4405 [inline]
 sch_direct_xmit+0x30f/0xbc0 net/sched/sch_generic.c:340
 __dev_xmit_skb net/core/dev.c:3809 [inline]
 __dev_queue_xmit+0x1489/0x36e0 net/core/dev.c:4177
 neigh_resolve_output net/core/neighbour.c:1492 [inline]
 neigh_resolve_output+0x50e/0x820 net/core/neighbour.c:1472
 neigh_output include/net/neighbour.h:510 [inline]
 ip_finish_output2+0x813/0x2140 net/ipv4/ip_output.c:221
 __ip_finish_output net/ipv4/ip_output.c:299 [inline]
 __ip_finish_output+0x396/0x640 net/ipv4/ip_output.c:281
 ip_finish_output+0x32/0x200 net/ipv4/ip_output.c:309
 NF_HOOK_COND include/linux/netfilter.h:296 [inline]
 ip_output+0x196/0x310 net/ipv4/ip_output.c:423
 dst_output include/net/dst.h:450 [inline]
 ip_local_out+0xaf/0x1a0 net/ipv4/ip_output.c:126
 iptunnel_xmit+0x628/0xa50 net/ipv4/ip_tunnel_core.c:82
 ip_tunnel_xmit+0x10a6/0x2b60 net/ipv4/ip_tunnel.c:810
 erspan_xmit+0x7e2/0x29c0 net/ipv4/ip_gre.c:712
 __netdev_start_xmit include/linux/netdevice.h:4988 [inline]
 netdev_start_xmit include/linux/netdevice.h:5002 [inline]
 xmit_one net/core/dev.c:3582 [inline]
 dev_hard_start_xmit+0x1eb/0x920 net/core/dev.c:3598
 sch_direct_xmit+0x19f/0xbc0 net/sched/sch_generic.c:342
 __dev_xmit_skb net/core/dev.c:3809 [inline]
 __dev_queue_xmit+0x1489/0x36e0 net/core/dev.c:4177
 neigh_resolve_output net/core/neighbour.c:1492 [inline]
 neigh_resolve_output+0x50e/0x820 net/core/neighbour.c:1472
 neigh_output include/net/neighbour.h:510 [inline]
 ip_finish_output2+0x813/0x2140 net/ipv4/ip_output.c:221
 __ip_finish_output net/ipv4/ip_output.c:299 [inline]
 __ip_finish_output+0x396/0x640 net/ipv4/ip_output.c:281
 ip_finish_output+0x32/0x200 net/ipv4/ip_output.c:309
 NF_HOOK_COND include/linux/netfilter.h:296 [inline]
 ip_output+0x196/0x310 net/ipv4/ip_output.c:423
 dst_output include/net/dst.h:450 [inline]
 ip_local_out net/ipv4/ip_output.c:126 [inline]
 ip_send_skb+0xd4/0x260 net/ipv4/ip_output.c:1555
 udp_send_skb+0x6cd/0x11a0 net/ipv4/udp.c:967
 udp_sendmsg+0x1bad/0x2740 net/ipv4/udp.c:1254
 udpv6_sendmsg+0x14f6/0x2c40 net/ipv6/udp.c:1360
 inet6_sendmsg+0x99/0xe0 net/ipv6/af_inet6.c:643
 sock_sendmsg_nosec net/socket.c:704 [inline]
 sock_sendmsg+0xcf/0x120 net/socket.c:724
 ____sys_sendmsg+0x331/0x810 net/socket.c:2409
 ___sys_sendmsg+0xf3/0x170 net/socket.c:2463
 __sys_sendmmsg+0x195/0x470 net/socket.c:2549
 __do_sys_sendmmsg net/socket.c:2578 [inline]
 __se_sys_sendmmsg net/socket.c:2575 [inline]
 __x64_sys_sendmmsg+0x99/0x100 net/socket.c:2575
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7f728d0d9aa9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 41 15 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 c0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffda2643b88 EFLAGS: 00000246 ORIG_RAX: 0000000000000133
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f728d0d9aa9
RDX: 0000000000000001 RSI: 0000000020004d8



* Re: [syzbot] [net?] possible deadlock in sch_direct_xmit (2)
  2020-04-29  0:59 possible deadlock in sch_direct_xmit (2) syzbot
  2021-10-29  2:08 ` [syzbot] " syzbot
@ 2023-11-24  0:38 ` syzbot
  2023-11-26  6:50 ` [syzbot] [net?] possible deadlock in sch_direct_xmit syzbot
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: syzbot @ 2023-11-24  0:38 UTC (permalink / raw)
  To: administracion, ap420073, davem, edumazet, hdanton, jhs, jiri,
	kuba, linux-kernel, netdev, pabeni, syzkaller-bugs,
	xiyou.wangcong

syzbot has bisected this issue to:

commit 1a33e10e4a95cb109ff1145098175df3113313ef
Author: Cong Wang <xiyou.wangcong@gmail.com>
Date:   Sun May 3 05:22:19 2020 +0000

    net: partially revert dynamic lockdep key changes

bisection log:  https://syzkaller.appspot.com/x/bisect.txt?x=17cd55af680000
start commit:   feb9c5e19e91 Merge tag 'for_linus' of git://git.kernel.org..
git tree:       upstream
final oops:     https://syzkaller.appspot.com/x/report.txt?x=142d55af680000
console output: https://syzkaller.appspot.com/x/log.txt?x=102d55af680000
kernel config:  https://syzkaller.appspot.com/x/.config?x=78013caa620443d6
dashboard link: https://syzkaller.appspot.com/bug?extid=e18ac85757292b7baf96
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=14430eb9f00000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=13738f71f00000

Reported-by: syzbot+e18ac85757292b7baf96@syzkaller.appspotmail.com
Fixes: 1a33e10e4a95 ("net: partially revert dynamic lockdep key changes")

For information about bisection process see: https://goo.gl/tpsmEJ#bisection


* Re: [syzbot] [net?] possible deadlock in sch_direct_xmit
  2020-04-29  0:59 possible deadlock in sch_direct_xmit (2) syzbot
  2021-10-29  2:08 ` [syzbot] " syzbot
  2023-11-24  0:38 ` [syzbot] [net?] " syzbot
@ 2023-11-26  6:50 ` syzbot
  2023-11-26  9:46 ` syzbot
  2023-11-27 12:10 ` syzbot
  4 siblings, 0 replies; 6+ messages in thread
From: syzbot @ 2023-11-26  6:50 UTC (permalink / raw)
  To: linux-kernel

For archival purposes, forwarding an incoming command email to
linux-kernel@vger.kernel.org.

***

Subject: [net?] possible deadlock in sch_direct_xmit
Author: eadavis@qq.com

Please test this patch for the deadlock in sch_direct_xmit.

#syz test https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master

diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 4195a4bc26ca..4605314e605e 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -338,8 +338,11 @@ bool sch_direct_xmit(struct sk_buff *skb, struct Qdisc *q,
 
 	if (likely(skb)) {
 		HARD_TX_LOCK(dev, txq, smp_processor_id());
-		if (!netif_xmit_frozen_or_stopped(txq))
+		if (!netif_xmit_frozen_or_stopped(txq)) {
+			netif_tx_stop_queue(txq);
 			skb = dev_hard_start_xmit(skb, dev, txq, &ret);
+			netif_tx_start_queue(txq);
+		}
 		else
 			qdisc_maybe_clear_missed(q, txq);
 



* Re: [syzbot] [net?] possible deadlock in sch_direct_xmit
  2020-04-29  0:59 possible deadlock in sch_direct_xmit (2) syzbot
                   ` (2 preceding siblings ...)
  2023-11-26  6:50 ` [syzbot] [net?] possible deadlock in sch_direct_xmit syzbot
@ 2023-11-26  9:46 ` syzbot
  2023-11-27 12:10 ` syzbot
  4 siblings, 0 replies; 6+ messages in thread
From: syzbot @ 2023-11-26  9:46 UTC (permalink / raw)
  To: linux-kernel

For archival purposes, forwarding an incoming command email to
linux-kernel@vger.kernel.org.

***

Subject: [net?] possible deadlock in sch_direct_xmit
Author: eadavis@qq.com

Please test this patch for the deadlock in sch_direct_xmit.

#syz test https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master

diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 4195a4bc26ca..d9d39887a550 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -337,13 +337,16 @@ bool sch_direct_xmit(struct sk_buff *skb, struct Qdisc *q,
 #endif
 
 	if (likely(skb)) {
-		HARD_TX_LOCK(dev, txq, smp_processor_id());
-		if (!netif_xmit_frozen_or_stopped(txq))
+		if (!netif_xmit_frozen_or_stopped(txq)) {
+			HARD_TX_LOCK(dev, txq, smp_processor_id());
+			netif_tx_stop_queue(txq);
 			skb = dev_hard_start_xmit(skb, dev, txq, &ret);
+			netif_tx_start_queue(txq);
+			HARD_TX_UNLOCK(dev, txq);
+		}
 		else
 			qdisc_maybe_clear_missed(q, txq);
 
-		HARD_TX_UNLOCK(dev, txq);
 	} else {
 		if (root_lock)
 			spin_lock(root_lock);



* Re: [syzbot] [net?] possible deadlock in sch_direct_xmit
  2020-04-29  0:59 possible deadlock in sch_direct_xmit (2) syzbot
                   ` (3 preceding siblings ...)
  2023-11-26  9:46 ` syzbot
@ 2023-11-27 12:10 ` syzbot
  4 siblings, 0 replies; 6+ messages in thread
From: syzbot @ 2023-11-27 12:10 UTC (permalink / raw)
  To: linux-kernel

For archival purposes, forwarding an incoming command email to
linux-kernel@vger.kernel.org.

***

Subject: [net?] possible deadlock in sch_direct_xmit
Author: eadavis@qq.com

Please test this patch for the deadlock in sch_direct_xmit.

#syz test https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master

diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 4195a4bc26ca..9e418f94757d 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -337,13 +337,16 @@ bool sch_direct_xmit(struct sk_buff *skb, struct Qdisc *q,
 #endif
 
 	if (likely(skb)) {
-		HARD_TX_LOCK(dev, txq, smp_processor_id());
-		if (!netif_xmit_frozen_or_stopped(txq))
-			skb = dev_hard_start_xmit(skb, dev, txq, &ret);
-		else
-			qdisc_maybe_clear_missed(q, txq);
+		int cpu = smp_processor_id();
+		if (READ_ONCE(txq->xmit_lock_owner) != cpu) {
+			HARD_TX_LOCK(dev, txq, cpu);
+			if (!netif_xmit_frozen_or_stopped(txq))
+				skb = dev_hard_start_xmit(skb, dev, txq, &ret);
+			else
+				qdisc_maybe_clear_missed(q, txq);
 
-		HARD_TX_UNLOCK(dev, txq);
+			HARD_TX_UNLOCK(dev, txq);
+		}
 	} else {
 		if (root_lock)
 			spin_lock(root_lock);


