* INFO: rcu detected stall in __schedule
@ 2018-05-02 17:56 ` syzbot
  0 siblings, 0 replies; 6+ messages in thread
From: syzbot @ 2018-05-02 17:56 UTC (permalink / raw)
  To: linux-kernel, linux-ppp, netdev, paulus, syzkaller-bugs

Hello,

syzbot found the following crash on:

HEAD commit:    f2125992e7cb Merge tag 'xfs-4.17-fixes-1' of git://git.kern...
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?id=4755940087693312
kernel config:  https://syzkaller.appspot.com/x/.config?id=6493557782959164711
dashboard link: https://syzkaller.appspot.com/bug?extid=f16b3e3512a1e3c1d1f6
compiler:       gcc (GCC) 8.0.1 20180413 (experimental)

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+f16b3e3512a1e3c1d1f6@syzkaller.appspotmail.com

do_dccp_getsockopt: sockopt(PACKET_SIZE) is deprecated: fix your app
do_dccp_getsockopt: sockopt(PACKET_SIZE) is deprecated: fix your app
ntfs: (device loop6): parse_options(): Unrecognized mount option error�n��uldip.
INFO: rcu_sched self-detected stall on CPU
	0-...!: (125000 ticks this GP) idle=f3e/1/4611686018427387906 softirq=112858/112858 fqs=0
	 (t=125000 jiffies g=61626 c=61625 q=1534)
rcu_sched kthread starved for 125000 jiffies! g61626 c61625 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x402 ->cpu=0
RCU grace-period kthread stack dump:
rcu_sched       I23592     9      2 0x80000000
Call Trace:
 context_switch kernel/sched/core.c:2848 [inline]
 __schedule+0x801/0x1e30 kernel/sched/core.c:3490
 schedule+0xef/0x430 kernel/sched/core.c:3549
 schedule_timeout+0x138/0x240 kernel/time/timer.c:1801
 rcu_gp_kthread+0x6b5/0x1940 kernel/rcu/tree.c:2231
 kthread+0x345/0x410 kernel/kthread.c:238
 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:412
NMI backtrace for cpu 0
CPU: 0 PID: 26694 Comm: syz-executor1 Not tainted 4.17.0-rc3+ #28
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 <IRQ>
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x1b9/0x294 lib/dump_stack.c:113
 nmi_cpu_backtrace.cold.4+0x19/0xce lib/nmi_backtrace.c:103
 nmi_trigger_cpumask_backtrace+0x151/0x192 lib/nmi_backtrace.c:62
 arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
 trigger_single_cpu_backtrace include/linux/nmi.h:156 [inline]
 rcu_dump_cpu_stacks+0x175/0x1c2 kernel/rcu/tree.c:1376
 print_cpu_stall kernel/rcu/tree.c:1525 [inline]
 check_cpu_stall.isra.61.cold.80+0x36c/0x59a kernel/rcu/tree.c:1593
 __rcu_pending kernel/rcu/tree.c:3356 [inline]
 rcu_pending kernel/rcu/tree.c:3401 [inline]
 rcu_check_callbacks+0x21b/0xad0 kernel/rcu/tree.c:2763
 update_process_times+0x2d/0x70 kernel/time/timer.c:1636
 tick_sched_handle+0x9f/0x180 kernel/time/tick-sched.c:164
 tick_sched_timer+0x45/0x130 kernel/time/tick-sched.c:1274
 __run_hrtimer kernel/time/hrtimer.c:1398 [inline]
 __hrtimer_run_queues+0x3e3/0x10a0 kernel/time/hrtimer.c:1460
 hrtimer_interrupt+0x2f3/0x750 kernel/time/hrtimer.c:1518
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1025 [inline]
 smp_apic_timer_interrupt+0x15d/0x710 arch/x86/kernel/apic/apic.c:1050
 apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:863
 </IRQ>
RIP: 0010:arch_local_irq_enable arch/x86/include/asm/paravirt.h:793 [inline]
RIP: 0010:__raw_spin_unlock_irq include/linux/spinlock_api_smp.h:168 [inline]
RIP: 0010:_raw_spin_unlock_irq+0x56/0x70 kernel/locking/spinlock.c:192
RSP: 0018:ffff8801b3dbf438 EFLAGS: 00000282 ORIG_RAX: ffffffffffffff13
RAX: dffffc0000000000 RBX: ffff8801dae2c680 RCX: 1ffff100361ebd4d
RDX: 1ffffffff11a316f RSI: ffff8801b0f5ea48 RDI: ffffffff88d18b78
RBP: ffff8801b3dbf440 R08: ffff8801b0f5e9f8 R09: 0000000000000006
R10: ffff8801b0f5e1c0 R11: 0000000000000000 R12: ffff8801b0f5e1c0
R13: ffff8801b0f5e7a0 R14: dffffc0000000000 R15: ffff8801b0f5e1c0
 rq_unlock_irq kernel/sched/sched.h:1824 [inline]
 __schedule+0x144f/0x1e30 kernel/sched/core.c:3493
 schedule+0xef/0x430 kernel/sched/core.c:3549
 do_sched_yield+0x187/0x240 kernel/sched/core.c:4965
 yield+0xa5/0xe0 kernel/sched/core.c:5054
 tasklet_kill+0x4e/0xd0 kernel/softirq.c:559
 ppp_asynctty_close+0x9e/0x150 drivers/net/ppp/ppp_async.c:239
 ppp_asynctty_hangup+0x15/0x20 drivers/net/ppp/ppp_async.c:256
 tty_ldisc_hangup+0x138/0x640 drivers/tty/tty_ldisc.c:730
 __tty_hangup.part.21+0x2da/0x6e0 drivers/tty/tty_io.c:621
 __tty_hangup drivers/tty/tty_io.c:571 [inline]
 tty_vhangup+0x21/0x30 drivers/tty/tty_io.c:694
 pty_close+0x3bd/0x510 drivers/tty/pty.c:78
 tty_release+0x494/0x12e0 drivers/tty/tty_io.c:1656
 __fput+0x34d/0x890 fs/file_table.c:209
 ____fput+0x15/0x20 fs/file_table.c:243
 task_work_run+0x1e4/0x290 kernel/task_work.c:113
 tracehook_notify_resume include/linux/tracehook.h:191 [inline]
 exit_to_usermode_loop+0x2bd/0x310 arch/x86/entry/common.c:166
 prepare_exit_to_usermode arch/x86/entry/common.c:196 [inline]
 syscall_return_slowpath arch/x86/entry/common.c:265 [inline]
 do_syscall_64+0x6ac/0x800 arch/x86/entry/common.c:290
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x455979
RSP: 002b:00007f92c3751c68 EFLAGS: 00000246 ORIG_RAX: 0000000000000003
RAX: 0000000000000000 RBX: 00007f92c37526d4 RCX: 0000000000455979
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000013
RBP: 000000000072bf50 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000ffffffff
R13: 0000000000000053 R14: 00000000006f4868 R15: 0000000000000001

---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this bug report.
If you forgot to add the Reported-by tag, once the fix for this bug is merged
into any tree, please reply to this email with:
#syz fix: exact-commit-title
To mark this as a duplicate of another syzbot report, please reply with:
#syz dup: exact-subject-of-another-report
If it's a one-off invalid bug report, please reply with:
#syz invalid
Note: if the crash happens again, it will cause creation of a new bug report.
Note: all commands must start from beginning of the line in the email body.

^ permalink raw reply	[flat|nested] 6+ messages in thread
* Re: INFO: rcu detected stall in __schedule
  2018-05-02 17:56 ` syzbot
@ 2018-05-03  5:45 ` Tetsuo Handa
  -1 siblings, 0 replies; 6+ messages in thread
From: Tetsuo Handa @ 2018-05-03  5:45 UTC (permalink / raw)
  To: syzbot, syzkaller-bugs, dvyukov; +Cc: linux-kernel, linux-ppp, netdev, paulus

I'm not sure whether this is a PPP bug.

As of uptime = 484, RCU says that it stalled for 125 seconds.

----------
[  484.407032] INFO: rcu_sched self-detected stall on CPU
[  484.412488]  0-...!: (125000 ticks this GP) idle=f3e/1/4611686018427387906 softirq=112858/112858 fqs=0
[  484.422300]   (t=125000 jiffies g=61626 c=61625 q=1534)
[  484.427663] rcu_sched kthread starved for 125000 jiffies! g61626 c61625 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x402 ->cpu=0
----------

484 - 125 = 359, which is about where SND related fuzzing starts in that log.

----------
2033/05/18 03:36:31 executing program 1:
r0 = socket(0x40000a, 0x5, 0x7)
setsockopt$inet_int(r0, 0x0, 0x18, &(0x7f0000000000)=0x200, 0x4)
bind$inet6(r0, &(0x7f00000000c0)={0xa, 0x0, 0x0, @loopback={0x0, 0x1}}, 0x1c)
perf_event_open(&(0x7f0000000040)={0x2, 0x70, 0x3e5}, 0x0, 0xffffffffffffffff, 0xffffffffffffffff, 0x0)
timer_create(0x0, &(0x7f00000001c0)={0x0, 0x15, 0x0, @thr={&(0x7f0000000440), &(0x7f0000000540)}}, &(0x7f0000000200))
timer_getoverrun(0x0)
perf_event_open(&(0x7f000025c000)={0x2, 0x78, 0x3e3}, 0x0, 0x0, 0xffffffffffffffff, 0x0)
r1 = syz_open_dev$sndctrl(&(0x7f0000000200)='/dev/snd/controlC#\x00', 0x2, 0x0)
perf_event_open(&(0x7f0000001000)={0x0, 0x70, 0x0, 0x0, 0x0, 0x0, 0x0, 0x8ce, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x7, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xfffffffffffffff8, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, @perf_bp={&(0x7f0000005000), 0x2}, 0x1000000000c}, 0x0, 0x0, 0xffffffffffffffff, 0x0)
ioctl$SNDRV_CTL_IOCTL_SUBSCRIBE_EVENTS(r1, 0xc0045516, &(0x7f00000000c0)=0x1)
r2 = syz_open_dev$sndpcmp(&(0x7f0000000100)='/dev/snd/pcmC#D#p\x00', 0x1, 0x4000)
ioctl$SNDRV_SEQ_IOCTL_GET_QUEUE_CLIENT(r2, 0xc04c5349, &(0x7f0000000240)={0x200, 0xfffffffffffffcdc, 0x1})
syz_open_dev$tun(&(0x7f00000003c0)='/dev/net/tun\x00', 0x0, 0x20402)
ioctl$SNDRV_CTL_IOCTL_PVERSION(r1, 0xc1105517, &(0x7f0000001000)=""/250)
ioctl$SNDRV_CTL_IOCTL_SUBSCRIBE_EVENTS(r1, 0xc0045516, &(0x7f0000000000))

2033/05/18 03:36:31 executing program 4:
syz_emit_ethernet(0x3e, &(0x7f00000000c0)={@broadcast=[0xff, 0xff, 0xff, 0xff, 0xff, 0xff], @empty=[0x0, 0x0, 0xb00000000000000], [], {@ipv4={0x800, {{0x5, 0x4, 0x0, 0x0, 0x30, 0x0, 0x0, 0x0, 0x1, 0x0, @remote={0xac, 0x14, 0x14, 0xbb}, @dev={0xac, 0x14, 0x14}}, @icmp=@parameter_prob={0x5, 0x4, 0x0, 0x0, 0x0, 0x0, {0x5, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, @local={0xac, 0x223, 0x14, 0xaa}, @dev={0xac, 0x14, 0x14}}}}}}}, &(0x7f0000000000)={0x0, 0x2, [0x0, 0x2e6]})

2033/05/18 03:36:31 executing program 1:
r0 = socket$pppoe(0x18, 0x1, 0x0)
connect$pppoe(r0, &(0x7f00000000c0)={0x18, 0x0, {0x1, @broadcast=[0xff, 0xff, 0xff, 0xff, 0xff, 0xff], 'ip6_vti0\x00'}}, 0x1e)
r1 = socket(0x3, 0xb, 0x80000001)
setsockopt$inet_sctp6_SCTP_ADAPTATION_LAYER(r1, 0x84, 0x7, &(0x7f0000000100)={0x2}, 0x4)
ioctl$sock_inet_SIOCGIFADDR(r0, 0x8915, &(0x7f0000000040)={'veth1_to_bridge\x00', {0x2, 0x4e21}})
r2 = syz_open_dev$admmidi(&(0x7f0000000000)='/dev/admmidi#\x00', 0x6, 0x8000)
setsockopt$SO_VM_SOCKETS_BUFFER_MAX_SIZE(r2, 0x28, 0x2, &(0x7f0000000080)=0xffffffffffffff00, 0x8)

[  359.306427] snd_virmidi snd_virmidi.0: control 112:0:0:�\b:0 is already present
----------

^ permalink raw reply	[flat|nested] 6+ messages in thread
* Re: INFO: rcu detected stall in __schedule
  2018-05-03  5:45 ` Tetsuo Handa
@ 2018-05-03  6:07 ` Dmitry Vyukov
  -1 siblings, 0 replies; 6+ messages in thread
From: Dmitry Vyukov @ 2018-05-03  6:07 UTC (permalink / raw)
  To: Tetsuo Handa; +Cc: syzbot, syzkaller-bugs, LKML, linux-ppp, netdev, paulus

On Thu, May 3, 2018 at 7:45 AM, Tetsuo Handa
<penguin-kernel@i-love.sakura.ne.jp> wrote:
> I'm not sure whether this is a PPP bug.
>
> As of uptime = 484, RCU says that it stalled for 125 seconds.
>
> ----------
> [  484.407032] INFO: rcu_sched self-detected stall on CPU
> [  484.412488]  0-...!: (125000 ticks this GP) idle=f3e/1/4611686018427387906 softirq=112858/112858 fqs=0
> [  484.422300]   (t=125000 jiffies g=61626 c=61625 q=1534)
> [  484.427663] rcu_sched kthread starved for 125000 jiffies! g61626 c61625 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x402 ->cpu=0
> ----------
>
> 484 - 125 = 359, which was about to start SND related fuzzing in that log.
>
> ----------
> 2033/05/18 03:36:31 executing program 1:
> r0 = socket(0x40000a, 0x5, 0x7)
> setsockopt$inet_int(r0, 0x0, 0x18, &(0x7f0000000000)=0x200, 0x4)
> bind$inet6(r0, &(0x7f00000000c0)={0xa, 0x0, 0x0, @loopback={0x0, 0x1}}, 0x1c)
> perf_event_open(&(0x7f0000000040)={0x2, 0x70, 0x3e5}, 0x0, 0xffffffffffffffff, 0xffffffffffffffff, 0x0)
> timer_create(0x0, &(0x7f00000001c0)={0x0, 0x15, 0x0, @thr={&(0x7f0000000440), &(0x7f0000000540)}}, &(0x7f0000000200))
> timer_getoverrun(0x0)
> perf_event_open(&(0x7f000025c000)={0x2, 0x78, 0x3e3}, 0x0, 0x0, 0xffffffffffffffff, 0x0)
> r1 = syz_open_dev$sndctrl(&(0x7f0000000200)='/dev/snd/controlC#\x00', 0x2, 0x0)
> perf_event_open(&(0x7f0000001000)={0x0, 0x70, 0x0, 0x0, 0x0, 0x0, 0x0, 0x8ce, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x7, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xfffffffffffffff8, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, @perf_bp={&(0x7f0000005000), 0x2}, 0x1000000000c}, 0x0, 0x0, 0xffffffffffffffff, 0x0)
> ioctl$SNDRV_CTL_IOCTL_SUBSCRIBE_EVENTS(r1, 0xc0045516, &(0x7f00000000c0)=0x1)
> r2 = syz_open_dev$sndpcmp(&(0x7f0000000100)='/dev/snd/pcmC#D#p\x00', 0x1, 0x4000)
> ioctl$SNDRV_SEQ_IOCTL_GET_QUEUE_CLIENT(r2, 0xc04c5349, &(0x7f0000000240)={0x200, 0xfffffffffffffcdc, 0x1})
> syz_open_dev$tun(&(0x7f00000003c0)='/dev/net/tun\x00', 0x0, 0x20402)
> ioctl$SNDRV_CTL_IOCTL_PVERSION(r1, 0xc1105517, &(0x7f0000001000)=""/250)
> ioctl$SNDRV_CTL_IOCTL_SUBSCRIBE_EVENTS(r1, 0xc0045516, &(0x7f0000000000))
>
> 2033/05/18 03:36:31 executing program 4:
> syz_emit_ethernet(0x3e, &(0x7f00000000c0)={@broadcast=[0xff, 0xff, 0xff, 0xff, 0xff, 0xff], @empty=[0x0, 0x0, 0xb00000000000000], [], {@ipv4={0x800, {{0x5, 0x4, 0x0, 0x0, 0x30, 0x0, 0x0, 0x0, 0x1, 0x0, @remote={0xac, 0x14, 0x14, 0xbb}, @dev={0xac, 0x14, 0x14}}, @icmp=@parameter_prob={0x5, 0x4, 0x0, 0x0, 0x0, 0x0, {0x5, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, @local={0xac, 0x223, 0x14, 0xaa}, @dev={0xac, 0x14, 0x14}}}}}}}, &(0x7f0000000000)={0x0, 0x2, [0x0, 0x2e6]})
>
> 2033/05/18 03:36:31 executing program 1:
> r0 = socket$pppoe(0x18, 0x1, 0x0)
> connect$pppoe(r0, &(0x7f00000000c0)={0x18, 0x0, {0x1, @broadcast=[0xff, 0xff, 0xff, 0xff, 0xff, 0xff], 'ip6_vti0\x00'}}, 0x1e)
> r1 = socket(0x3, 0xb, 0x80000001)
> setsockopt$inet_sctp6_SCTP_ADAPTATION_LAYER(r1, 0x84, 0x7, &(0x7f0000000100)={0x2}, 0x4)
> ioctl$sock_inet_SIOCGIFADDR(r0, 0x8915, &(0x7f0000000040)={'veth1_to_bridge\x00', {0x2, 0x4e21}})
> r2 = syz_open_dev$admmidi(&(0x7f0000000000)='/dev/admmidi#\x00', 0x6, 0x8000)
> setsockopt$SO_VM_SOCKETS_BUFFER_MAX_SIZE(r2, 0x28, 0x2, &(0x7f0000000080)=0xffffffffffffff00, 0x8)
>
> [  359.306427] snd_virmidi snd_virmidi.0: control 112:0:0:� :0 is already present
> ----------

It's the next one that caused the hang (the number in "Comm:
syz-executor1" matches the number in "executing program 1"):

[  359.306427] snd_virmidi snd_virmidi.0: control 112:0:0:� :0 is already present

2033/05/18 03:36:31 executing program 1:
r0 = openat$ptmx(0xffffffffffffff9c, &(0x7f0000000140)='/dev/ptmx\x00', 0x0, 0x0)
ioctl$TCSETS(r0, 0x40045431, &(0x7f00005befdc))
r1 = syz_open_pts(r0, 0x20201)
fcntl$setstatus(r1, 0x4, 0x2800)
ioctl$TCXONC(r1, 0x540a, 0x0)
perf_event_open(&(0x7f000025c000)={0x2, 0x70, 0x3e5, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, @perf_bp={&(0x7f000031f000)}}, 0x0, 0x0, 0xffffffffffffffff, 0x0)
write(r1, &(0x7f0000fd6000)='z', 0x1)
r2 = openat$ipvs(0xffffffffffffff9c, &(0x7f0000000000)='/proc/sys/net/ipv4/vs/sync_ports\x00', 0x2, 0x0)
ioctl$ifreq_SIOCGIFINDEX_team(0xffffffffffffff9c, 0x8933, &(0x7f00000012c0)={'team0\x00', <r3=>0x0})
bind$packet(r2, &(0x7f0000001300)={0x11, 0x1f, r3, 0x1, 0x0, 0x6, @random="31e8917e98e6"}, 0x14)
ioctl$TIOCSETD(r1, 0x5423, &(0x7f00000000c0)=0x3)
ioctl$TCFLSH(r0, 0x540b, 0x0)
close(r0)

^ permalink raw reply	[flat|nested] 6+ messages in thread
end of thread, other threads:[~2018-05-03  6:07 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-05-02 17:56 INFO: rcu detected stall in __schedule syzbot
2018-05-02 17:56 ` syzbot
2018-05-03  5:45 ` Tetsuo Handa
2018-05-03  5:45   ` Tetsuo Handa
2018-05-03  6:07   ` Dmitry Vyukov
2018-05-03  6:07     ` Dmitry Vyukov