* Re: possible deadlock in perf_event_ctx_lock_nested
       [not found] <001a11448f6c0346ec055c831a71@google.com>
@ 2017-10-27  8:31 ` Dmitry Vyukov
  2017-10-27 15:11 ` Peter Zijlstra
  1 sibling, 0 replies; 5+ messages in thread
From: Dmitry Vyukov @ 2017-10-27  8:31 UTC (permalink / raw)
  To: syzbot
  Cc: Arnaldo Carvalho de Melo, Alexander Shishkin, LKML, Ingo Molnar,
	Peter Zijlstra, syzkaller-bugs

On Fri, Oct 27, 2017 at 10:30 AM, syzbot
<bot+f67ad450a4bd1e42a7bd09f592904b15be39db7a@syzkaller.appspotmail.com>
wrote:
> Hello,
>
> syzkaller hit the following crash on
> a31cc455c512f3f1dd5f79cac8e29a7c8a617af8
> git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/master
> compiler: gcc (GCC) 7.1.1 20170620
> .config is attached
> Raw console output is attached.


Last happened on upstream 68ebe3cbe786a954481d6ded0dc83f7d551640bf (Oct 9).


> ======================================================
> WARNING: possible circular locking dependency detected
> 4.13.0-next-20170911+ #19 Not tainted
> ------------------------------------------------------
> syz-executor2/12380 is trying to acquire lock:
>  (&ctx->mutex){+.+.}, at: [<ffffffff8180923c>]
> perf_event_ctx_lock_nested+0x1dc/0x3c0 kernel/events/core.c:1210
>
> but task is already holding lock:
>  (&pipe->mutex/1){+.+.}, at: [<ffffffff81ac0fa6>] pipe_lock_nested
> fs/pipe.c:66 [inline]
>  (&pipe->mutex/1){+.+.}, at: [<ffffffff81ac0fa6>] pipe_lock+0x56/0x70
> fs/pipe.c:74
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
>
> -> #8 (&pipe->mutex/1){+.+.}:
>        check_prevs_add kernel/locking/lockdep.c:2020 [inline]
>        validate_chain kernel/locking/lockdep.c:2469 [inline]
>        __lock_acquire+0x328f/0x4620 kernel/locking/lockdep.c:3498
>        lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4002
>        __mutex_lock_common kernel/locking/mutex.c:756 [inline]
>        __mutex_lock+0x16f/0x1870 kernel/locking/mutex.c:893
>        mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:908
>        pipe_lock_nested fs/pipe.c:66 [inline]
>        pipe_lock+0x56/0x70 fs/pipe.c:74
>        iter_file_splice_write+0x264/0xf50 fs/splice.c:699
>        do_splice_from fs/splice.c:851 [inline]
>        do_splice fs/splice.c:1147 [inline]
>        SYSC_splice fs/splice.c:1402 [inline]
>        SyS_splice+0x7d5/0x1630 fs/splice.c:1382
>        entry_SYSCALL_64_fastpath+0x1f/0xbe
>
> -> #7 (sb_writers){.+.+}:
>        percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:35
> [inline]
>        percpu_down_read include/linux/percpu-rwsem.h:58 [inline]
>        __sb_start_write+0x18f/0x290 fs/super.c:1341
>        sb_start_write include/linux/fs.h:1541 [inline]
>        mnt_want_write+0x3f/0xb0 fs/namespace.c:387
>        filename_create+0x12b/0x520 fs/namei.c:3628
>        kern_path_create+0x33/0x40 fs/namei.c:3674
>        handle_create+0xc0/0x760 drivers/base/devtmpfs.c:203
>
> -> #6 ((complete)&req.done){+.+.}:
>        check_prevs_add kernel/locking/lockdep.c:2020 [inline]
>        validate_chain kernel/locking/lockdep.c:2469 [inline]
>        __lock_acquire+0x328f/0x4620 kernel/locking/lockdep.c:3498
>        lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4002
>        complete_acquire include/linux/completion.h:39 [inline]
>        __wait_for_common kernel/sched/completion.c:108 [inline]
>        wait_for_common kernel/sched/completion.c:122 [inline]
>        wait_for_completion+0xc8/0x770 kernel/sched/completion.c:143
>        devtmpfs_create_node+0x32b/0x4a0 drivers/base/devtmpfs.c:115
>        device_add+0x120f/0x1640 drivers/base/core.c:1824
>        device_create_groups_vargs+0x1f3/0x250 drivers/base/core.c:2430
>        device_create_vargs drivers/base/core.c:2470 [inline]
>        device_create+0xda/0x110 drivers/base/core.c:2506
>        msr_device_create+0x26/0x40 arch/x86/kernel/msr.c:188
>        cpuhp_invoke_callback+0x256/0x14d0 kernel/cpu.c:145
>        cpuhp_thread_fun+0x265/0x520 kernel/cpu.c:434
>        smpboot_thread_fn+0x489/0x850 kernel/smpboot.c:164
>        kthread+0x39c/0x470 kernel/kthread.c:231
>        ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:431
>
> -> #5 (cpuhp_state){+.+.}:
>        check_prevs_add kernel/locking/lockdep.c:2020 [inline]
>        validate_chain kernel/locking/lockdep.c:2469 [inline]
>        __lock_acquire+0x328f/0x4620 kernel/locking/lockdep.c:3498
>        lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4002
>        cpuhp_invoke_ap_callback kernel/cpu.c:467 [inline]
>        cpuhp_issue_call+0x1a2/0x3e0 kernel/cpu.c:1308
>        __cpuhp_setup_state_cpuslocked+0x2d6/0x5f0 kernel/cpu.c:1455
>        __cpuhp_setup_state+0xb0/0x140 kernel/cpu.c:1484
>        cpuhp_setup_state include/linux/cpuhotplug.h:177 [inline]
>        page_writeback_init+0x4d/0x71 mm/page-writeback.c:2082
>        pagecache_init+0x48/0x4f mm/filemap.c:871
>        start_kernel+0x6c1/0x754 init/main.c:690
>        x86_64_start_reservations+0x2a/0x2c arch/x86/kernel/head64.c:377
>        x86_64_start_kernel+0x87/0x8a arch/x86/kernel/head64.c:358
>        verify_cpu+0x0/0xfb
>
> -> #4 (cpuhp_state_mutex){+.+.}:
>        check_prevs_add kernel/locking/lockdep.c:2020 [inline]
>        validate_chain kernel/locking/lockdep.c:2469 [inline]
>        __lock_acquire+0x328f/0x4620 kernel/locking/lockdep.c:3498
>        lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4002
>        __mutex_lock_common kernel/locking/mutex.c:756 [inline]
>        __mutex_lock+0x16f/0x1870 kernel/locking/mutex.c:893
>        mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:908
>        __cpuhp_setup_state_cpuslocked+0x5b/0x5f0 kernel/cpu.c:1430
>        __cpuhp_setup_state+0xb0/0x140 kernel/cpu.c:1484
>        cpuhp_setup_state_nocalls include/linux/cpuhotplug.h:205 [inline]
>        kvm_guest_init+0x1f3/0x20f arch/x86/kernel/kvm.c:488
>        setup_arch+0x1899/0x1ab3 arch/x86/kernel/setup.c:1295
>        start_kernel+0xa5/0x754 init/main.c:530
>        x86_64_start_reservations+0x2a/0x2c arch/x86/kernel/head64.c:377
>        x86_64_start_kernel+0x87/0x8a arch/x86/kernel/head64.c:358
>        verify_cpu+0x0/0xfb
>
> -> #3 (cpu_hotplug_lock.rw_sem){++++}:
>        check_prevs_add kernel/locking/lockdep.c:2020 [inline]
>        validate_chain kernel/locking/lockdep.c:2469 [inline]
>        __lock_acquire+0x328f/0x4620 kernel/locking/lockdep.c:3498
>        lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4002
>        percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:35
> [inline]
>        percpu_down_read include/linux/percpu-rwsem.h:58 [inline]
>        cpus_read_lock+0x42/0x90 kernel/cpu.c:218
>        static_key_slow_inc+0x9d/0x3c0 kernel/jump_label.c:123
>        tracepoint_add_func kernel/tracepoint.c:223 [inline]
>        tracepoint_probe_register_prio+0x80d/0x9a0 kernel/tracepoint.c:283
>        tracepoint_probe_register+0x2a/0x40 kernel/tracepoint.c:304
>        trace_event_reg+0x167/0x320 kernel/trace/trace_events.c:305
>        perf_trace_event_reg kernel/trace/trace_event_perf.c:122 [inline]
>        perf_trace_event_init kernel/trace/trace_event_perf.c:197 [inline]
>        perf_trace_init+0x4f9/0xab0 kernel/trace/trace_event_perf.c:221
>        perf_tp_event_init+0x7d/0xf0 kernel/events/core.c:8037
>        perf_try_init_event+0xc9/0x1f0 kernel/events/core.c:9261
>        perf_init_event kernel/events/core.c:9299 [inline]
>        perf_event_alloc+0x1c5b/0x2a00 kernel/events/core.c:9558
>        SYSC_perf_event_open+0x84e/0x2e00 kernel/events/core.c:10013
>        SyS_perf_event_open+0x39/0x50 kernel/events/core.c:9899
>        entry_SYSCALL_64_fastpath+0x1f/0xbe
>
> -> #2 (tracepoints_mutex){+.+.}:
>        check_prevs_add kernel/locking/lockdep.c:2020 [inline]
>        validate_chain kernel/locking/lockdep.c:2469 [inline]
>        __lock_acquire+0x328f/0x4620 kernel/locking/lockdep.c:3498
>        lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4002
>        __mutex_lock_common kernel/locking/mutex.c:756 [inline]
>        __mutex_lock+0x16f/0x1870 kernel/locking/mutex.c:893
>        mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:908
>        tracepoint_probe_register_prio+0xa0/0x9a0 kernel/tracepoint.c:279
>        tracepoint_probe_register+0x2a/0x40 kernel/tracepoint.c:304
>        trace_event_reg+0x167/0x320 kernel/trace/trace_events.c:305
>        perf_trace_event_reg kernel/trace/trace_event_perf.c:122 [inline]
>        perf_trace_event_init kernel/trace/trace_event_perf.c:197 [inline]
>        perf_trace_init+0x4f9/0xab0 kernel/trace/trace_event_perf.c:221
>        perf_tp_event_init+0x7d/0xf0 kernel/events/core.c:8037
>        perf_try_init_event+0xc9/0x1f0 kernel/events/core.c:9261
>        perf_init_event kernel/events/core.c:9299 [inline]
>        perf_event_alloc+0x1c5b/0x2a00 kernel/events/core.c:9558
>        SYSC_perf_event_open+0x84e/0x2e00 kernel/events/core.c:10013
>        SyS_perf_event_open+0x39/0x50 kernel/events/core.c:9899
>        entry_SYSCALL_64_fastpath+0x1f/0xbe
>
> -> #1 (event_mutex){+.+.}:
>        check_prevs_add kernel/locking/lockdep.c:2020 [inline]
>        validate_chain kernel/locking/lockdep.c:2469 [inline]
>        __lock_acquire+0x328f/0x4620 kernel/locking/lockdep.c:3498
>        lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4002
>        __mutex_lock_common kernel/locking/mutex.c:756 [inline]
>        __mutex_lock+0x16f/0x1870 kernel/locking/mutex.c:893
>        mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:908
>        ftrace_profile_set_filter+0x7a/0x270
> kernel/trace/trace_events_filter.c:2266
>        perf_event_set_filter kernel/events/core.c:8622 [inline]
>        _perf_ioctl kernel/events/core.c:4785 [inline]
>        perf_ioctl+0x1026/0x1360 kernel/events/core.c:4822
>        vfs_ioctl fs/ioctl.c:45 [inline]
>        do_vfs_ioctl+0x1b1/0x1530 fs/ioctl.c:685
>        SYSC_ioctl fs/ioctl.c:700 [inline]
>        SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691
>        entry_SYSCALL_64_fastpath+0x1f/0xbe
>
> -> #0 (&ctx->mutex){+.+.}:
>        check_prev_add+0x865/0x1520 kernel/locking/lockdep.c:1894
>        check_prevs_add kernel/locking/lockdep.c:2020 [inline]
>        validate_chain kernel/locking/lockdep.c:2469 [inline]
>        __lock_acquire+0x328f/0x4620 kernel/locking/lockdep.c:3498
>        lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4002
>        __mutex_lock_common kernel/locking/mutex.c:756 [inline]
>        __mutex_lock+0x16f/0x1870 kernel/locking/mutex.c:893
>        mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:908
>        perf_event_ctx_lock_nested+0x1dc/0x3c0 kernel/events/core.c:1210
>        perf_event_ctx_lock kernel/events/core.c:1223 [inline]
>        perf_read+0xb9/0x970 kernel/events/core.c:4584
>        do_loop_readv_writev fs/read_write.c:693 [inline]
>        do_iter_read+0x3db/0x5b0 fs/read_write.c:917
>        vfs_readv+0x121/0x1c0 fs/read_write.c:979
>        kernel_readv fs/splice.c:361 [inline]
>        default_file_splice_read+0x508/0xae0 fs/splice.c:416
>        do_splice_to+0x110/0x170 fs/splice.c:880
>        do_splice fs/splice.c:1173 [inline]
>        SYSC_splice fs/splice.c:1402 [inline]
>        SyS_splice+0x11a8/0x1630 fs/splice.c:1382
>        entry_SYSCALL_64_fastpath+0x1f/0xbe
>
> other info that might help us debug this:
>
> Chain exists of:
>   &ctx->mutex --> sb_writers --> &pipe->mutex/1
>
>  Possible unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(&pipe->mutex/1);
>                                lock(sb_writers);
>                                lock(&pipe->mutex/1);
>   lock(&ctx->mutex);
>
>  *** DEADLOCK ***
>
> 1 lock held by syz-executor2/12380:
>  #0:  (&pipe->mutex/1){+.+.}, at: [<ffffffff81ac0fa6>] pipe_lock_nested
> fs/pipe.c:66 [inline]
>  #0:  (&pipe->mutex/1){+.+.}, at: [<ffffffff81ac0fa6>] pipe_lock+0x56/0x70
> fs/pipe.c:74
>
> stack backtrace:
> CPU: 1 PID: 12380 Comm: syz-executor2 Not tainted 4.13.0-next-20170911+ #19
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
> Google 01/01/2011
> Call Trace:
>  __dump_stack lib/dump_stack.c:16 [inline]
>  dump_stack+0x194/0x257 lib/dump_stack.c:52
>  print_circular_bug+0x503/0x710 kernel/locking/lockdep.c:1259
>  check_prev_add+0x865/0x1520 kernel/locking/lockdep.c:1894
>  check_prevs_add kernel/locking/lockdep.c:2020 [inline]
>  validate_chain kernel/locking/lockdep.c:2469 [inline]
>  __lock_acquire+0x328f/0x4620 kernel/locking/lockdep.c:3498
>  lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4002
>  __mutex_lock_common kernel/locking/mutex.c:756 [inline]
>  __mutex_lock+0x16f/0x1870 kernel/locking/mutex.c:893
>  mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:908
>  perf_event_ctx_lock_nested+0x1dc/0x3c0 kernel/events/core.c:1210
>  perf_event_ctx_lock kernel/events/core.c:1223 [inline]
>  perf_read+0xb9/0x970 kernel/events/core.c:4584
>  do_loop_readv_writev fs/read_write.c:693 [inline]
>  do_iter_read+0x3db/0x5b0 fs/read_write.c:917
>  vfs_readv+0x121/0x1c0 fs/read_write.c:979
>  kernel_readv fs/splice.c:361 [inline]
>  default_file_splice_read+0x508/0xae0 fs/splice.c:416
>  do_splice_to+0x110/0x170 fs/splice.c:880
>  do_splice fs/splice.c:1173 [inline]
>  SYSC_splice fs/splice.c:1402 [inline]
>  SyS_splice+0x11a8/0x1630 fs/splice.c:1382
>  entry_SYSCALL_64_fastpath+0x1f/0xbe
> RIP: 0033:0x451e59
> RSP: 002b:00007ff32dce9c08 EFLAGS: 00000216 ORIG_RAX: 0000000000000113
> RAX: ffffffffffffffda RBX: 0000000000718000 RCX: 0000000000451e59
> RDX: 000000000000001b RSI: 0000000000000000 RDI: 000000000000001c
> RBP: 0000000000000082 R08: 0000000200000005 R09: 0000000000000009
> R10: 0000000000000000 R11: 0000000000000216 R12: 0000000000000000
> R13: 0000000000a6f7ef R14: 00007ff32dcea9c0 R15: 0000000000000000
> sctp: [Deprecated]: syz-executor1 (pid 12436) Use of struct sctp_assoc_value
> in delayed_ack socket option.
> Use struct sctp_sack_info instead
> sctp: [Deprecated]: syz-executor1 (pid 12439) Use of struct sctp_assoc_value
> in delayed_ack socket option.
> Use struct sctp_sack_info instead
> loop_reread_partitions: partition scan of loop0 (<garbled, non-printable name>) failed (rc=-13)
> QAT: Invalid ioctl
> QAT: Invalid ioctl
> QAT: Invalid ioctl
> QAT: Invalid ioctl
> sctp: [Deprecated]: syz-executor0 (pid 12525) Use of struct sctp_assoc_value
> in delayed_ack socket option.
> Use struct sctp_sack_info instead
> sctp: [Deprecated]: syz-executor0 (pid 12525) Use of struct sctp_assoc_value
> in delayed_ack socket option.
> Use struct sctp_sack_info instead
> xs_tcp_setup_socket: connect returned unhandled error -113
> [the line above is repeated 238 times in total in the raw console log]
> IPv6: ADDRCONF(NETDEV_CHANGE): lo: link becomes ready
> kvm: pic: non byte write
> kvm: pic: single mode not supported
> QAT: Invalid ioctl
> QAT: Invalid ioctl
> pit: kvm: requested 2514 ns i8254 timer period limited to 500000 ns
> sctp: [Deprecated]: syz-executor2 (pid 12946) Use of int in maxseg socket
> option.
> Use struct sctp_assoc_value instead
> sctp: [Deprecated]: syz-executor2 (pid 12946) Use of int in maxseg socket
> option.
> Use struct sctp_assoc_value instead
> nla_parse: 6 callbacks suppressed
> netlink: 5 bytes leftover after parsing attributes in process
> `syz-executor5'.
> netlink: 5 bytes leftover after parsing attributes in process
> `syz-executor5'.
> pit: kvm: requested 2514 ns i8254 timer period limited to 500000 ns
> pit: kvm: requested 2514 ns i8254 timer period limited to 500000 ns
> kauditd_printk_skb: 65 callbacks suppressed
> audit: type=1326 audit(1505176342.033:940): auid=4294967295 uid=0 gid=0
> ses=4294967295 subj=kernel pid=13113 comm="" exe="/root/syz-executor1" sig=0
> arch=c000003e syscall=202 compat=0 ip=0x451e59 code=0x50000
> audit: type=1326 audit(1505176342.059:941): auid=4294967295 uid=0 gid=0
> ses=4294967295 subj=kernel pid=13113 comm="" exe="/root/syz-executor1" sig=0
> arch=c000003e syscall=2 compat=0 ip=0x40c3e1 code=0x50000
> audit: type=1326 audit(1505176342.059:942): auid=4294967295 uid=0 gid=0
> ses=4294967295 subj=kernel pid=13113 comm="" exe="/root/syz-executor1" sig=0
> arch=c000003e syscall=202 compat=0 ip=0x451e59 code=0x50000
> audit: type=1326 audit(1505176342.059:943): auid=4294967295 uid=0 gid=0
> ses=4294967295 subj=kernel pid=13113 comm="" exe="/root/syz-executor1" sig=0
> arch=c000003e syscall=202 compat=0 ip=0x451e59 code=0x50000
> audit: type=1326 audit(1505176342.060:944): auid=4294967295 uid=0 gid=0
> ses=4294967295 subj=kernel pid=13113 comm="" exe="/root/syz-executor1" sig=0
> arch=c000003e syscall=202 compat=0 ip=0x451e59 code=0x50000
> audit: type=1326 audit(1505176342.060:945): auid=4294967295 uid=0 gid=0
> ses=4294967295 subj=kernel pid=13113 comm="" exe="/root/syz-executor1" sig=0
> arch=c000003e syscall=202 compat=0 ip=0x451e59 code=0x50000
> audit: type=1326 audit(1505176342.060:946): auid=4294967295 uid=0 gid=0
> ses=4294967295 subj=kernel pid=13113 comm="" exe="/root/syz-executor1" sig=0
> arch=c000003e syscall=202 compat=0 ip=0x451e59 code=0x50000
> audit: type=1326 audit(1505176342.060:947): auid=4294967295 uid=0 gid=0
> ses=4294967295 subj=kernel pid=13113 comm="" exe="/root/syz-executor1" sig=0
> arch=c000003e syscall=202 compat=0 ip=0x451e59 code=0x50000
> audit: type=1326 audit(1505176342.060:948): auid=4294967295 uid=0 gid=0
> ses=4294967295 subj=kernel pid=13113 comm="" exe="/root/syz-executor1" sig=0
> arch=c000003e syscall=202 compat=0 ip=0x451e59 code=0x50000
> audit: type=1326 audit(1505176342.061:949): auid=4294967295 uid=0 gid=0
> ses=4294967295 subj=kernel pid=13113 comm="" exe="/root/syz-executor1" sig=0
> arch=c000003e syscall=202 compat=0 ip=0x451e59 code=0x50000
> pit: kvm: requested 2514 ns i8254 timer period limited to 500000 ns
> pit: kvm: requested 2514 ns i8254 timer period limited to 500000 ns
> pit: kvm: requested 2514 ns i8254 timer period limited to 500000 ns
> pit: kvm: requested 2514 ns i8254 timer period limited to 500000 ns
> pit: kvm: requested 2514 ns i8254 timer period limited to 500000 ns
> pit: kvm: requested 2514 ns i8254 timer period limited to 500000 ns
> QAT: Invalid ioctl
> device syz5 entered promiscuous mode
> QAT: Invalid ioctl
> device syz5 left promiscuous mode
> device syz5 entered promiscuous mode
> pit: kvm: requested 2514 ns i8254 timer period limited to 500000 ns
> do_dccp_getsockopt: sockopt(PACKET_SIZE) is deprecated: fix your app
> do_dccp_getsockopt: sockopt(PACKET_SIZE) is deprecated: fix your app
> device syz2 entered promiscuous mode
> device syz2 left promiscuous mode
> device syz2 entered promiscuous mode
> netlink: 5 bytes leftover after parsing attributes in process
> `syz-executor2'.
> SELinux: unrecognized netlink message: protocol=0 nlmsg_type=0
> sclass=netlink_route_socket pig=13714 comm=syz-executor2
> FAULT_INJECTION: forcing a failure.
> name failslab, interval 1, probability 0, space 0, times 1
> sctp: [Deprecated]: syz-executor1 (pid 13806) Use of int in max_burst socket
> option.
> Use struct sctp_assoc_value instead
> CPU: 0 PID: 13805 Comm: syz-executor6 Not tainted 4.13.0-next-20170911+ #19
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
> Google 01/01/2011
> Call Trace:
>  __dump_stack lib/dump_stack.c:16 [inline]
>  dump_stack+0x194/0x257 lib/dump_stack.c:52
>  fail_dump lib/fault-inject.c:51 [inline]
>  should_fail+0x8c0/0xa40 lib/fault-inject.c:149
>  should_failslab+0xec/0x120 mm/failslab.c:31
>  slab_pre_alloc_hook mm/slab.h:422 [inline]
>  slab_alloc mm/slab.c:3383 [inline]
>  __do_kmalloc mm/slab.c:3723 [inline]
>  __kmalloc_track_caller+0x5f/0x760 mm/slab.c:3740
>  memdup_user+0x2c/0x90 mm/util.c:164
>  map_lookup_elem+0x21c/0xb30 kernel/bpf/syscall.c:461
>  SYSC_bpf kernel/bpf/syscall.c:1473 [inline]
>  SyS_bpf+0x1070/0x46a0 kernel/bpf/syscall.c:1451
>  entry_SYSCALL_64_fastpath+0x1f/0xbe
> RIP: 0033:0x451e59
> RSP: 002b:00007f75f19e1c08 EFLAGS: 00000216 ORIG_RAX: 0000000000000141
> RAX: ffffffffffffffda RBX: 0000000000718000 RCX: 0000000000451e59
> RDX: 0000000000000018 RSI: 00000000204a2fe8 RDI: 0000000000000001
> RBP: 00007f75f19e1a10 R08: 0000000000000000 R09: 0000000000000000
> R10: 0000000000000000 R11: 0000000000000216 R12: 00000000004b69f7
> R13: 00007f75f19e1b48 R14: 00000000004b6a07 R15: 0000000000000000
> sctp: [Deprecated]: syz-executor1 (pid 13814) Use of int in max_burst socket
> option.
> Use struct sctp_assoc_value instead
> sctp: [Deprecated]: syz-executor1 (pid 13873) Use of int in max_burst socket
> option.
> Use struct sctp_assoc_value instead
> sctp: [Deprecated]: syz-executor1 (pid 13895) Use of int in max_burst socket
> option.
> Use struct sctp_assoc_value instead
> FAULT_INJECTION: forcing a failure.
> name failslab, interval 1, probability 0, space 0, times 0
> CPU: 1 PID: 14039 Comm: syz-executor3 Not tainted 4.13.0-next-20170911+ #19
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
> Google 01/01/2011
> Call Trace:
>  __dump_stack lib/dump_stack.c:16 [inline]
>  dump_stack+0x194/0x257 lib/dump_stack.c:52
>  fail_dump lib/fault-inject.c:51 [inline]
>  should_fail+0x8c0/0xa40 lib/fault-inject.c:149
>  should_failslab+0xec/0x120 mm/failslab.c:31
>  slab_pre_alloc_hook mm/slab.h:422 [inline]
>  slab_alloc_node mm/slab.c:3304 [inline]
>  kmem_cache_alloc_node+0x56/0x760 mm/slab.c:3649
>  __alloc_skb+0xf1/0x740 net/core/skbuff.c:194
>  alloc_skb include/linux/skbuff.h:976 [inline]
>  tcp_xmit_probe_skb+0x67/0x360 net/ipv4/tcp_output.c:3599
>  tcp_send_window_probe+0x11d/0x150 net/ipv4/tcp_output.c:3621
>  do_tcp_setsockopt.isra.35+0xca7/0x1fd0 net/ipv4/tcp.c:2586
>  tcp_setsockopt+0xb0/0xd0 net/ipv4/tcp.c:2801
>  sock_common_setsockopt+0x95/0xd0 net/core/sock.c:2960
>  SYSC_setsockopt net/socket.c:1852 [inline]
>  SyS_setsockopt+0x189/0x360 net/socket.c:1831
>  entry_SYSCALL_64_fastpath+0x1f/0xbe
> RIP: 0033:0x451e59
> RSP: 002b:00007f1f8ae1bc08 EFLAGS: 00000216 ORIG_RAX: 0000000000000036
> RAX: ffffffffffffffda RBX: 0000000000718000 RCX: 0000000000451e59
> RDX: 0000000000000013 RSI: 0000000000000006 RDI: 0000000000000005
> RBP: 00007f1f8ae1ba10 R08: 0000000000000004 R09: 0000000000000000
> R10: 0000000020d15000 R11: 0000000000000216 R12: 00000000004b69f7
> R13: 00007f1f8ae1bb48 R14: 00000000004b6a07 R15: 0000000000000000
> sg_write: data in/out 3171656/194 bytes for SCSI command 0x2a-- guessing
> data in;
>    program syz-executor2 not setting count and/or reply_len properly
> sg_write: data in/out 3171656/194 bytes for SCSI command 0x2a-- guessing
> data in;
>    program syz-executor2 not setting count and/or reply_len properly
> create_pit_timer: 19 callbacks suppressed
> pit: kvm: requested 2514 ns i8254 timer period limited to 500000 ns
> pit: kvm: requested 2514 ns i8254 timer period limited to 500000 ns
>
>
> ---
> This bug is generated by a dumb bot. It may contain errors.
> See https://goo.gl/tpsmEJ for details.
> Direct all questions to syzkaller@googlegroups.com.
>
> syzbot will keep track of this bug report.
> Once a fix for this bug is committed, please reply to this email with:
> #syz fix: exact-commit-title
> To mark this as a duplicate of another syzbot report, please reply with:
> #syz dup: exact-subject-of-another-report
> If it's a one-off invalid bug report, please reply with:
> #syz invalid
> Note: if the crash happens again, it will cause creation of a new bug
> report.
>
> --
> You received this message because you are subscribed to the Google Groups
> "syzkaller-bugs" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to syzkaller-bugs+unsubscribe@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/syzkaller-bugs/001a11448f6c0346ec055c831a71%40google.com.
> For more options, visit https://groups.google.com/d/optout.

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: possible deadlock in perf_event_ctx_lock_nested
       [not found] <001a11448f6c0346ec055c831a71@google.com>
  2017-10-27  8:31 ` possible deadlock in perf_event_ctx_lock_nested Dmitry Vyukov
@ 2017-10-27 15:11 ` Peter Zijlstra
  2017-10-27 15:33   ` Peter Zijlstra
  1 sibling, 1 reply; 5+ messages in thread
From: Peter Zijlstra @ 2017-10-27 15:11 UTC (permalink / raw)
  To: syzbot
  Cc: acme, alexander.shishkin, linux-kernel, mingo, syzkaller-bugs,
	Thomas Gleixner

On Fri, Oct 27, 2017 at 01:30:30AM -0700, syzbot wrote:

> ======================================================
> WARNING: possible circular locking dependency detected
> 4.13.0-next-20170911+ #19 Not tainted
> ------------------------------------------------------
> syz-executor2/12380 is trying to acquire lock:
>  (&ctx->mutex){+.+.}, at: [<ffffffff8180923c>]
> perf_event_ctx_lock_nested+0x1dc/0x3c0 kernel/events/core.c:1210
> 
> but task is already holding lock:
>  (&pipe->mutex/1){+.+.}, at: [<ffffffff81ac0fa6>] pipe_lock_nested
> fs/pipe.c:66 [inline]
>  (&pipe->mutex/1){+.+.}, at: [<ffffffff81ac0fa6>] pipe_lock+0x56/0x70
> fs/pipe.c:74
> 
> which lock already depends on the new lock.


ARRGH!!

That translates to the below, which is an absolute maze and requires
at least 5 concurrent callstacks, possibly more.

We already had a lot of fun with hotplug-perf-ftrace, but the below
contains more. Let me try and page that previous crap back.



	perf_ioctl()
#0	  perf_event_ctx_lock()		[ctx->mutex]
	  perf_event_set_filter  
#1	    ftrace_profile_set_filter   [event_mutex]
		



	sys_perf_event_open
	  ...
	  perf_trace_init
#1	    mutex_lock [event_mutex]
	    trace_event_reg
	      tracepoint_probe_register
#2		mutex_lock() [tracepoints_mutex]
		tracepoint_add_func()
#3		  static_key_slow_inc()	[cpuhotplug_lock]
	




	cpuhp_setup_state_nocalls
#3	  cpus_read_lock	[cpuhotplug_lock]
	    __cpuhp_setup_state_cpuslocked
#4	      mutex_lock	[cpuhp_state_mutex]
	      cpuhp_issue_call
#5		cpuhp_invoke_ap_callback()	[cpuhp_state]


#5	cpuhp_invoke_callback [cpuhp_state]
	  ...
	    devtmpfs_create_node
#6	      wait_for_completion()	[&req.done]

						devtmpfsd
						  handle_create
#7						    filename_create [sb_writers]
#6						  complete [&req.done]
							





	do_splice
#7	  file_start_write() [sb_writers]
	  do_splice_from
	    iter_file_splice_write
#8	      pipe_lock [pipe->mutex]





	do_splice
#8	  pipe_lock	[pipe->mutex]
	  do_splice_to
	    ...
#0	      perf_read() [ctx->mutex]
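
For illustration (a minimal sketch, not the syzkaller reproducer; the
perf_event_attr setup is an assumption, any readable perf event fd
would do), that last link is just a splice from a perf fd into a pipe:

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/syscall.h>
	#include <linux/perf_event.h>

	int main(void)
	{
		struct perf_event_attr attr;
		int pfd[2], evt;

		memset(&attr, 0, sizeof(attr));
		attr.size = sizeof(attr);
		attr.type = PERF_TYPE_SOFTWARE;
		attr.config = PERF_COUNT_SW_CPU_CLOCK;

		/* counter for the calling task, any CPU */
		evt = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
		if (evt < 0 || pipe(pfd) < 0)
			return 1;

		/*
		 * do_splice() takes pipe->mutex (#8); perf has no
		 * ->splice_read, so do_splice_to() falls back to
		 * default_file_splice_read() -> vfs_readv() -> perf_read(),
		 * which takes ctx->mutex (#0) and closes the cycle.
		 */
		splice(evt, NULL, pfd[1], NULL, 4096, 0);
		return 0;
	}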
		

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: possible deadlock in perf_event_ctx_lock_nested
  2017-10-27 15:11 ` Peter Zijlstra
@ 2017-10-27 15:33   ` Peter Zijlstra
  2018-02-12 15:04     ` Dmitry Vyukov
  0 siblings, 1 reply; 5+ messages in thread
From: Peter Zijlstra @ 2017-10-27 15:33 UTC (permalink / raw)
  To: syzbot
  Cc: acme, alexander.shishkin, linux-kernel, mingo, syzkaller-bugs,
	Thomas Gleixner, Steven Rostedt, viro

On Fri, Oct 27, 2017 at 05:11:37PM +0200, Peter Zijlstra wrote:
> On Fri, Oct 27, 2017 at 01:30:30AM -0700, syzbot wrote:
> 
> > ======================================================
> > WARNING: possible circular locking dependency detected
> > 4.13.0-next-20170911+ #19 Not tainted
> > ------------------------------------------------------
> > syz-executor2/12380 is trying to acquire lock:
> >  (&ctx->mutex){+.+.}, at: [<ffffffff8180923c>]
> > perf_event_ctx_lock_nested+0x1dc/0x3c0 kernel/events/core.c:1210
> > 
> > but task is already holding lock:
> >  (&pipe->mutex/1){+.+.}, at: [<ffffffff81ac0fa6>] pipe_lock_nested
> > fs/pipe.c:66 [inline]
> >  (&pipe->mutex/1){+.+.}, at: [<ffffffff81ac0fa6>] pipe_lock+0x56/0x70
> > fs/pipe.c:74
> > 
> > which lock already depends on the new lock.
> 
> 
> ARRGH!!
> 
> that translates like the below, which is an absolute maze and requires
> at least 5 concurrent callstacks, possibly more.
> 
> We already had a lot of fun with hotplug-perf-ftrace, but the below
> contains more. Let me try and page that previous crap back.
> 
> 
> 
> 	perf_ioctl()
> #0	  perf_event_ctx_lock()		[ctx->mutex]
> 	  perf_event_set_filter  
> #1	    ftrace_profile_set_filter   [event_mutex]
> 		
> 
> 
> 
> 	sys_perf_event_open
> 	  ...
> 	  perf_trace_init
> #1	    mutex_lock [event_mutex]
> 	    trace_event_reg
> 	      tracepoint_probe_register
> #2		mutex_lock() [tracepoints_mutex]
> 		tracepoint_add_func()
> #3		  static_key_slow_inc()	[cpuhotplug_lock]
> 	
> 
> 
> 
> 
> 	cpuhp_setup_state_nocalls
> #3	  cpus_read_lock	[cpuhotplug_lock]
> 	    __cpuhp_setup_state_cpuslocked
> #4	      mutex_lock	[cpuhp_state_mutex]
> 	      cpuhp_issue_call
> #5		cpuhp_invoke_ap_callback()	[cpuhp_state]
> 
> 
> #5	cpuhp_invoke_callback [cpuhp_state]
> 	  ...
> 	    devtmpfs_create_node
> #6	      wait_for_completion()	[&req.done]
> 
> 						devtmpfsd
> 						  handle_create
> #7						    filename_create [sb_writers]
> #6						  complete [&req.done]
> 							
> 
> 
> 
> 
> 
> 	do_splice
> #7	  file_start_write() [sb_writers]
> 	  do_splice_from
> 	    iter_file_splice_write
> #8	      pipe_lock [pipe->mutex]
> 
> 
> 
> 
> 
> 	do_splice
> #8	  pipe_lock	[pipe->mutex]
> 	  do_splice_to
> 	    ...
> #0	      perf_read() [ctx->mutex]
> 		

So arguably that last op, a splice_read() from a perf fd, is fairly
pointless and we could disallow it. How about something like the
below?

---
 kernel/events/core.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 04989fb769f0..fd03f3082ee3 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5468,6 +5468,13 @@ static int perf_fasync(int fd, struct file *filp, int on)
 	return 0;
 }
 
+static ssize_t perf_splice_read(struct file *file, loff_t *ppos,
+				struct pipe_inode_info *pipe, size_t len,
+				unsigned int flags)
+{
+	return -EOPNOTSUPP;
+}
+
 static const struct file_operations perf_fops = {
 	.llseek			= no_llseek,
 	.release		= perf_release,
@@ -5477,6 +5484,7 @@ static int perf_fasync(int fd, struct file *filp, int on)
 	.compat_ioctl		= perf_compat_ioctl,
 	.mmap			= perf_mmap,
 	.fasync			= perf_fasync,
+	.splice_read		= perf_splice_read,
 };
 
 /*
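
With something along these lines, a splice read from a perf event fd
would be expected to fail up front with EOPNOTSUPP instead of reaching
perf_read() under pipe->mutex, so the pipe->mutex -> ctx->mutex edge
(#8 -> #0 above) is never taken via splice. (That is the expected
behaviour of the sketch above, not a tested result.)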

^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: possible deadlock in perf_event_ctx_lock_nested
  2017-10-27 15:33   ` Peter Zijlstra
@ 2018-02-12 15:04     ` Dmitry Vyukov
  2018-02-14 14:00       ` Dmitry Vyukov
  0 siblings, 1 reply; 5+ messages in thread
From: Dmitry Vyukov @ 2018-02-12 15:04 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: syzbot, Arnaldo Carvalho de Melo, Alexander Shishkin, LKML,
	Ingo Molnar, syzkaller-bugs, Thomas Gleixner, Steven Rostedt,
	Al Viro

On Fri, Oct 27, 2017 at 5:33 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Fri, Oct 27, 2017 at 05:11:37PM +0200, Peter Zijlstra wrote:
>> On Fri, Oct 27, 2017 at 01:30:30AM -0700, syzbot wrote:
>>
>> > ======================================================
>> > WARNING: possible circular locking dependency detected
>> > 4.13.0-next-20170911+ #19 Not tainted
>> > ------------------------------------------------------
>> > syz-executor2/12380 is trying to acquire lock:
>> >  (&ctx->mutex){+.+.}, at: [<ffffffff8180923c>]
>> > perf_event_ctx_lock_nested+0x1dc/0x3c0 kernel/events/core.c:1210
>> >
>> > but task is already holding lock:
>> >  (&pipe->mutex/1){+.+.}, at: [<ffffffff81ac0fa6>] pipe_lock_nested
>> > fs/pipe.c:66 [inline]
>> >  (&pipe->mutex/1){+.+.}, at: [<ffffffff81ac0fa6>] pipe_lock+0x56/0x70
>> > fs/pipe.c:74
>> >
>> > which lock already depends on the new lock.
>>
>>
>> ARRGH!!
>>
>> that translates like the below, which is an absolute maze and requires
>> at least 5 concurrent callstacks, possibly more.
>>
>> We already had a lot of fun with hotplug-perf-ftrace, but the below
>> contains more. Let me try and page that previous crap back.
>>
>>
>>
>>       perf_ioctl()
>> #0      perf_event_ctx_lock()         [ctx->mutex]
>>         perf_event_set_filter
>> #1        ftrace_profile_set_filter   [event_mutex]
>>
>>
>>
>>
>>       sys_perf_event_open
>>         ...
>>         perf_trace_init
>> #1        mutex_lock [event_mutex]
>>           trace_event_reg
>>             tracepoint_probe_register
>> #2            mutex_lock() [tracepoints_mutex]
>>               tracepoint_add_func()
>> #3              static_key_slow_inc() [cpuhotplug_lock]
>>
>>
>>
>>
>>
>>       cpuhp_setup_state_nocalls
>> #3      cpus_read_lock        [cpuhotplug_lock]
>>           __cpuhp_setup_state_cpuslocked
>> #4          mutex_lock        [cpuhp_state_mutex]
>>             cpuhp_issue_call
>> #5            cpuhp_invoke_ap_callback()      [cpuhp_state]
>>
>>
>> #5    cpuhp_invoke_callback [cpuhp_state]
>>         ...
>>           devtmpfs_create_node
>> #6          wait_for_completion()     [&req.done]
>>
>>                                               devtmpfsd
>>                                                 handle_create
>> #7                                                filename_create [sb_writers]
>> #6                                              complete [&req.done]
>>
>>
>>
>>
>>
>>
>>       do_splice
>> #7      file_start_write() [sb_writers]
>>         do_splice_from
>>           iter_file_splice_write
>> #8          pipe_lock [pipe->mutex]
>>
>>
>>
>>
>>
>>       do_splice
>> #8      pipe_lock     [pipe->mutex]
>>         do_splice_to
>>           ...
>> #0          perf_read() [ctx->mutex]
>>
>
> So arguably that last op, splice_read from a perf fd is fairly
> pointless and we could dis-allow that. How about something like the
> below?
>
> ---
>  kernel/events/core.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 04989fb769f0..fd03f3082ee3 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -5468,6 +5468,13 @@ static int perf_fasync(int fd, struct file *filp, int on)
>         return 0;
>  }
>
> +static ssize_t perf_splice_read(struct file *file, loff_t *ppos,
> +                               struct pipe_inode_info *pipe, size_t len,
> +                               unsigned int flags)
> +{
> +       return -EOPNOTSUPP;
> +}
> +
>  static const struct file_operations perf_fops = {
>         .llseek                 = no_llseek,
>         .release                = perf_release,
> @@ -5477,6 +5484,7 @@ static int perf_fasync(int fd, struct file *filp, int on)
>         .compat_ioctl           = perf_compat_ioctl,
>         .mmap                   = perf_mmap,
>         .fasync                 = perf_fasync,
> +       .splice_read            = perf_splice_read,
>  };


Another perf deadlock that involves splice:
https://groups.google.com/d/msg/syzkaller-bugs/vVy6Zj3wPxo/6oj5U6WiAwAJ

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: possible deadlock in perf_event_ctx_lock_nested
  2018-02-12 15:04     ` Dmitry Vyukov
@ 2018-02-14 14:00       ` Dmitry Vyukov
  0 siblings, 0 replies; 5+ messages in thread
From: Dmitry Vyukov @ 2018-02-14 14:00 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: syzbot, Arnaldo Carvalho de Melo, Alexander Shishkin, LKML,
	Ingo Molnar, syzkaller-bugs, Thomas Gleixner, Steven Rostedt,
	Al Viro

On Mon, Feb 12, 2018 at 4:04 PM, Dmitry Vyukov <dvyukov@google.com> wrote:
>>> > ======================================================
>>> > WARNING: possible circular locking dependency detected
>>> > 4.13.0-next-20170911+ #19 Not tainted
>>> > ------------------------------------------------------
>>> > syz-executor2/12380 is trying to acquire lock:
>>> >  (&ctx->mutex){+.+.}, at: [<ffffffff8180923c>]
>>> > perf_event_ctx_lock_nested+0x1dc/0x3c0 kernel/events/core.c:1210
>>> >
>>> > but task is already holding lock:
>>> >  (&pipe->mutex/1){+.+.}, at: [<ffffffff81ac0fa6>] pipe_lock_nested
>>> > fs/pipe.c:66 [inline]
>>> >  (&pipe->mutex/1){+.+.}, at: [<ffffffff81ac0fa6>] pipe_lock+0x56/0x70
>>> > fs/pipe.c:74
>>> >
>>> > which lock already depends on the new lock.
>>>
>>>
>>> ARRGH!!
>>>
>>> that translates like the below, which is an absolute maze and requires
>>> at least 5 concurrent callstacks, possibly more.
>>>
>>> We already had a lot of fun with hotplug-perf-ftrace, but the below
>>> contains more. Let me try and page that previous crap back.
>>>
>>>
>>>
>>>       perf_ioctl()
>>> #0      perf_event_ctx_lock()         [ctx->mutex]
>>>         perf_event_set_filter
>>> #1        ftrace_profile_set_filter   [event_mutex]
>>>
>>>
>>>
>>>
>>>       sys_perf_event_open
>>>         ...
>>>         perf_trace_init
>>> #1        mutex_lock [event_mutex]
>>>           trace_event_reg
>>>             tracepoint_probe_register
>>> #2            mutex_lock() [tracepoints_mutex]
>>>               tracepoint_add_func()
>>> #3              static_key_slow_inc() [cpuhotplug_lock]
>>>
>>>
>>>
>>>
>>>
>>>       cpuhp_setup_state_nocalls
>>> #3      cpus_read_lock        [cpuhotplug_lock]
>>>           __cpuhp_setup_state_cpuslocked
>>> #4          mutex_lock        [cpuhp_state_mutex]
>>>             cpuhp_issue_call
>>> #5            cpuhp_invoke_ap_callback()      [cpuhp_state]
>>>
>>>
>>> #5    cpuhp_invoke_callback [cpuhp_state]
>>>         ...
>>>           devtmpfs_create_node
>>> #6          wait_for_completion()     [&req.done]
>>>
>>>                                               devtmpfsd
>>>                                                 handle_create
>>> #7                                                filename_create [sb_writers]
>>> #6                                              complete [&req.done]
>>>
>>>
>>>
>>>
>>>
>>>
>>>       do_splice
>>> #7      file_start_write() [sb_writers]
>>>         do_splice_from
>>>           iter_file_splice_write
>>> #8          pipe_lock [pipe->mutex]
>>>
>>>
>>>
>>>
>>>
>>>       do_splice
>>> #8      pipe_lock     [pipe->mutex]
>>>         do_splice_to
>>>           ...
>>> #0          perf_read() [ctx->mutex]
>>>
>>
>> So arguably that last op, splice_read from a perf fd is fairly
>> pointless and we could dis-allow that. How about something like the
>> below?
>>
>> ---
>>  kernel/events/core.c | 8 ++++++++
>>  1 file changed, 8 insertions(+)
>>
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index 04989fb769f0..fd03f3082ee3 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -5468,6 +5468,13 @@ static int perf_fasync(int fd, struct file *filp, int on)
>>         return 0;
>>  }
>>
>> +static ssize_t perf_splice_read(struct file *file, loff_t *ppos,
>> +                               struct pipe_inode_info *pipe, size_t len,
>> +                               unsigned int flags)
>> +{
>> +       return -EOPNOTSUPP;
>> +}
>> +
>>  static const struct file_operations perf_fops = {
>>         .llseek                 = no_llseek,
>>         .release                = perf_release,
>> @@ -5477,6 +5484,7 @@ static int perf_fasync(int fd, struct file *filp, int on)
>>         .compat_ioctl           = perf_compat_ioctl,
>>         .mmap                   = perf_mmap,
>>         .fasync                 = perf_fasync,
>> +       .splice_read            = perf_splice_read,
>>  };
>
>
> Another perf deadlock that involves splice:
> https://groups.google.com/d/msg/syzkaller-bugs/vVy6Zj3wPxo/6oj5U6WiAwAJ


Peter,

What's the status of this? Is your patch still relevant?

^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2018-02-14 14:00 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <001a11448f6c0346ec055c831a71@google.com>
2017-10-27  8:31 ` possible deadlock in perf_event_ctx_lock_nested Dmitry Vyukov
2017-10-27 15:11 ` Peter Zijlstra
2017-10-27 15:33   ` Peter Zijlstra
2018-02-12 15:04     ` Dmitry Vyukov
2018-02-14 14:00       ` Dmitry Vyukov
