Subject: Re: [PATCH v4 bpf-next 1/9] bpf: introduce bpf_spin_lock
From: Eric Dumazet
To: Alexei Starovoitov, Peter Zijlstra
Cc: Alexei Starovoitov, davem@davemloft.net, daniel@iogearbox.net, jakub.kicinski@netronome.com, netdev@vger.kernel.org, kernel-team@fb.com, mingo@redhat.com, will.deacon@arm.com, Paul McKenney, jannh@google.com
Date: Thu, 24 Jan 2019 18:29:55 -0800
Message-ID: <395a3741-70c9-c345-08a4-77bc3bd3cae2@gmail.com>
In-Reply-To: <20190124235857.xyb5xx2ufr6x5mbt@ast-mbp.dhcp.thefacebook.com>
References: <20190124041403.2100609-1-ast@kernel.org> <20190124041403.2100609-2-ast@kernel.org> <20190124180109.GA27771@hirez.programming.kicks-ass.net> <20190124235857.xyb5xx2ufr6x5mbt@ast-mbp.dhcp.thefacebook.com>

On 01/24/2019 03:58 PM, Alexei Starovoitov wrote:
> On Thu, Jan 24, 2019 at 07:01:09PM +0100, Peter Zijlstra wrote:
>> and from NMI ...
>
> progs are not preemptable and map syscall accessors have bpf_prog_active counters.
> So nmi/kprobe progs will not be running when syscall is running.
> Hence dead lock is not possible and irq_save is not needed.

Speaking of NMI, how can pcpu_freelist_push() and pop() actually work?

It seems bpf_get_stackid() can be called from NMI, and lockdep complains loudly.
A locking sketch, the full test_progs run and the resulting splat are below.
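To make the question concrete, here is a minimal sketch of the locking
pattern involved (simplified from memory, not the exact
kernel/bpf/percpu_freelist.c source; names abbreviated):

struct pcpu_freelist_node {
	struct pcpu_freelist_node *next;
};

struct pcpu_freelist_head {
	struct pcpu_freelist_node *first;
	raw_spinlock_t lock;
};

static void freelist_push(struct pcpu_freelist_head *head,
			  struct pcpu_freelist_node *node)
{
	/* plain lock: hard irqs (and of course NMIs) stay enabled */
	raw_spin_lock(&head->lock);
	node->next = head->first;
	head->first = node;
	raw_spin_unlock(&head->lock);
}

static struct pcpu_freelist_node *freelist_pop(struct pcpu_freelist_head *head)
{
	struct pcpu_freelist_node *node;

	raw_spin_lock(&head->lock);
	node = head->first;
	if (node)
		head->first = node->next;
	raw_spin_unlock(&head->lock);
	return node;
}

If a task is interrupted by a perf NMI while it holds head->lock, and the
NMI handler runs bpf_get_stackid() which pushes or pops on the same per-cpu
freelist, it will spin on head->lock forever, since the interrupted owner
cannot run to release it. Disabling interrupts around the lock would not
change that, because NMIs are not masked by local_irq_save().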
lpaa5:/export/hda3/google/edumazet# ./test_progs
test_pkt_access:PASS:ipv4 48 nsec
test_pkt_access:PASS:ipv6 49 nsec
test_prog_run_xattr:PASS:load 0 nsec
test_prog_run_xattr:PASS:run 316 nsec
test_prog_run_xattr:PASS:data_size_out 316 nsec
test_prog_run_xattr:PASS:overflow 316 nsec
test_prog_run_xattr:PASS:run_no_output 431 nsec
test_prog_run_xattr:PASS:run_wrong_size_out 431 nsec
test_xdp:PASS:ipv4 1397 nsec
test_xdp:PASS:ipv6 523 nsec
test_xdp_adjust_tail:PASS:ipv4 382 nsec
test_xdp_adjust_tail:PASS:ipv6 210 nsec
test_l4lb:PASS:ipv4 143 nsec
test_l4lb:PASS:ipv6 164 nsec
libbpf: incorrect bpf_call opcode
libbpf: incorrect bpf_call opcode
test_tcp_estats:PASS: 0 nsec
test_bpf_obj_id:PASS:get-fd-by-notexist-prog-id 0 nsec
test_bpf_obj_id:PASS:get-fd-by-notexist-map-id 0 nsec
test_bpf_obj_id:PASS:get-map-info(fd) 0 nsec
test_bpf_obj_id:PASS:get-prog-info(fd) 0 nsec
test_bpf_obj_id:PASS:get-map-info(fd) 0 nsec
test_bpf_obj_id:PASS:get-prog-info(fd) 0 nsec
test_bpf_obj_id:PASS:get-prog-fd(next_id) 0 nsec
test_bpf_obj_id:PASS:get-prog-fd-bad-nr-map-ids 0 nsec
test_bpf_obj_id:PASS:get-prog-info(next_id->fd) 0 nsec
test_bpf_obj_id:PASS:get-prog-fd(next_id) 0 nsec
test_bpf_obj_id:PASS:get-prog-fd-bad-nr-map-ids 0 nsec
test_bpf_obj_id:PASS:get-prog-info(next_id->fd) 0 nsec
test_bpf_obj_id:PASS:check total prog id found by get_next_id 0 nsec
test_bpf_obj_id:PASS:get-map-fd(next_id) 0 nsec
test_bpf_obj_id:PASS:get-map-fd(next_id) 0 nsec
test_bpf_obj_id:PASS:get-map-fd(next_id) 0 nsec
test_bpf_obj_id:PASS:get-map-fd(next_id) 0 nsec
test_bpf_obj_id:PASS:get-map-fd(next_id) 0 nsec
test_bpf_obj_id:PASS:get-map-fd(next_id) 0 nsec
test_bpf_obj_id:PASS:get-map-fd(next_id) 0 nsec
test_bpf_obj_id:PASS:get-map-fd(next_id) 0 nsec
test_bpf_obj_id:PASS:get-map-fd(next_id) 0 nsec
test_bpf_obj_id:PASS:check get-map-info(next_id->fd) 0 nsec
test_bpf_obj_id:PASS:get-map-fd(next_id) 0 nsec
test_bpf_obj_id:PASS:check get-map-info(next_id->fd) 0 nsec
test_bpf_obj_id:PASS:check total map id found by get_next_id 0 nsec
test_pkt_md_access:PASS: 76 nsec
test_obj_name:PASS:check-bpf-prog-name 0 nsec
test_obj_name:PASS:check-bpf-map-name 0 nsec
test_obj_name:PASS:check-bpf-prog-name 0 nsec
test_obj_name:PASS:check-bpf-map-name 0 nsec
test_obj_name:PASS:check-bpf-prog-name 0 nsec
test_obj_name:PASS:check-bpf-map-name 0 nsec
test_obj_name:PASS:check-bpf-prog-name 0 nsec
test_obj_name:PASS:check-bpf-map-name 0 nsec
test_tp_attach_query:PASS:open 0 nsec
test_tp_attach_query:PASS:read 0 nsec
test_tp_attach_query:PASS:prog_load 0 nsec
test_tp_attach_query:PASS:bpf_obj_get_info_by_fd 0 nsec
test_tp_attach_query:PASS:perf_event_open 0 nsec
test_tp_attach_query:PASS:perf_event_ioc_enable 0 nsec
test_tp_attach_query:PASS:perf_event_ioc_query_bpf 0 nsec
test_tp_attach_query:PASS:perf_event_ioc_set_bpf 0 nsec
test_tp_attach_query:PASS:perf_event_ioc_query_bpf 0 nsec
test_tp_attach_query:PASS:perf_event_ioc_query_bpf 0 nsec
test_tp_attach_query:PASS:prog_load 0 nsec
test_tp_attach_query:PASS:bpf_obj_get_info_by_fd 0 nsec
test_tp_attach_query:PASS:perf_event_open 0 nsec
test_tp_attach_query:PASS:perf_event_ioc_enable 0 nsec
test_tp_attach_query:PASS:perf_event_ioc_set_bpf 0 nsec
test_tp_attach_query:PASS:perf_event_ioc_query_bpf 0 nsec
test_tp_attach_query:PASS:perf_event_ioc_query_bpf 0 nsec
test_tp_attach_query:PASS:perf_event_ioc_query_bpf 0 nsec
test_tp_attach_query:PASS:perf_event_ioc_query_bpf 0 nsec
test_tp_attach_query:PASS:perf_event_ioc_query_bpf 0 nsec
test_tp_attach_query:PASS:perf_event_ioc_query_bpf 0 nsec
test_tp_attach_query:PASS:prog_load 0 nsec
test_tp_attach_query:PASS:bpf_obj_get_info_by_fd 0 nsec
test_tp_attach_query:PASS:perf_event_open 0 nsec
test_tp_attach_query:PASS:perf_event_ioc_enable 0 nsec
test_tp_attach_query:PASS:perf_event_ioc_set_bpf 0 nsec
test_tp_attach_query:PASS:perf_event_ioc_query_bpf 0 nsec
test_tp_attach_query:PASS:perf_event_ioc_query_bpf 0 nsec
test_tp_attach_query:PASS:perf_event_ioc_query_bpf 0 nsec
test_tp_attach_query:PASS:perf_event_ioc_query_bpf 0 nsec
test_stacktrace_map:PASS:prog_load 0 nsec
test_stacktrace_map:PASS:open 0 nsec
test_stacktrace_map:PASS:perf_event_open 0 nsec
test_stacktrace_map:PASS:compare_map_keys stackid_hmap vs. stackmap 0 nsec
test_stacktrace_map:PASS:compare_map_keys stackmap vs. stackid_hmap 0 nsec
test_stacktrace_map:PASS:compare_stack_ips stackmap vs. stack_amap 0 nsec
test_stacktrace_build_id:PASS:prog_load 0 nsec
test_stacktrace_build_id:PASS:open 0 nsec
test_stacktrace_build_id:PASS:read 0 nsec
test_stacktrace_build_id:PASS:perf_event_open 0 nsec
test_stacktrace_build_id:PASS:perf_event_ioc_enable 0 nsec
test_stacktrace_build_id:PASS:perf_event_ioc_set_bpf 0 nsec
test_stacktrace_build_id:PASS:bpf_find_map control_map 0 nsec
test_stacktrace_build_id:PASS:bpf_find_map stackid_hmap 0 nsec
test_stacktrace_build_id:PASS:bpf_find_map stackmap 0 nsec
test_stacktrace_build_id:PASS:bpf_find_map stack_amap 0 nsec
test_stacktrace_build_id:PASS:compare_map_keys stackid_hmap vs. stackmap 0 nsec
test_stacktrace_build_id:PASS:compare_map_keys stackmap vs. stackid_hmap 0 nsec
test_stacktrace_build_id:PASS:get build_id with readelf 0 nsec
test_stacktrace_build_id:PASS:get_next_key from stackmap 0 nsec
test_stacktrace_build_id:PASS:lookup_elem from stackmap 0 nsec
test_stacktrace_build_id:PASS:lookup_elem from stackmap 0 nsec
test_stacktrace_build_id:PASS:build id match 0 nsec
test_stacktrace_build_id:PASS:compare_stack_ips stackmap vs. stack_amap 0 nsec
test_stacktrace_build_id_nmi:PASS:prog_load 0 nsec
test_stacktrace_build_id_nmi:PASS:perf_event_open 0 nsec
test_stacktrace_build_id_nmi:PASS:perf_event_ioc_enable 0 nsec
test_stacktrace_build_id_nmi:PASS:perf_event_ioc_set_bpf 0 nsec
test_stacktrace_build_id_nmi:PASS:bpf_find_map control_map 0 nsec
test_stacktrace_build_id_nmi:PASS:bpf_find_map stackid_hmap 0 nsec
test_stacktrace_build_id_nmi:PASS:bpf_find_map stackmap 0 nsec
test_stacktrace_build_id_nmi:PASS:bpf_find_map stack_amap 0 nsec
test_stacktrace_build_id_nmi:PASS:compare_map_keys stackid_hmap vs. stackmap 0 nsec
test_stacktrace_build_id_nmi:PASS:compare_map_keys stackmap vs. stackid_hmap 0 nsec
test_stacktrace_build_id_nmi:PASS:get build_id with readelf 0 nsec
test_stacktrace_build_id_nmi:PASS:get_next_key from stackmap 0 nsec
test_stacktrace_build_id_nmi:PASS:lookup_elem from stackmap 0 nsec
test_stacktrace_build_id_nmi:PASS:lookup_elem from stackmap 0 nsec
test_stacktrace_build_id_nmi:PASS:lookup_elem from stackmap 0 nsec
test_stacktrace_build_id_nmi:PASS:lookup_elem from stackmap 0 nsec
test_stacktrace_build_id_nmi:PASS:build id match 0 nsec
test_stacktrace_map_raw_tp:PASS:prog_load raw tp 0 nsec
test_stacktrace_map_raw_tp:PASS:raw_tp_open 0 nsec
test_stacktrace_map_raw_tp:PASS:compare_map_keys stackid_hmap vs. stackmap 0 nsec
test_stacktrace_map_raw_tp:PASS:compare_map_keys stackmap vs. stackid_hmap 0 nsec
test_get_stack_raw_tp:PASS:prog_load raw tp 0 nsec
test_get_stack_raw_tp:PASS:raw_tp_open 0 nsec
test_get_stack_raw_tp:PASS:bpf_find_map 0 nsec
test_get_stack_raw_tp:PASS:load_kallsyms 0 nsec
test_get_stack_raw_tp:PASS:perf_event_open 0 nsec
test_get_stack_raw_tp:PASS:bpf_map_update_elem 0 nsec
test_get_stack_raw_tp:PASS:ioctl PERF_EVENT_IOC_ENABLE 0 nsec
test_get_stack_raw_tp:PASS:perf_event_mmap 0 nsec
test_get_stack_raw_tp:PASS:perf_event_poller 0 nsec
test_task_fd_query_rawtp:PASS:prog_load raw tp 0 nsec
test_task_fd_query_rawtp:PASS:raw_tp_open 0 nsec
test_task_fd_query_rawtp:PASS:bpf_task_fd_query 0 nsec
test_task_fd_query_rawtp:PASS:check_results 0 nsec
test_task_fd_query_rawtp:PASS:bpf_task_fd_query (len = 0) 0 nsec
test_task_fd_query_rawtp:PASS:check_results 0 nsec
test_task_fd_query_rawtp:PASS:bpf_task_fd_query (buf = 0) 0 nsec
test_task_fd_query_rawtp:PASS:check_results 0 nsec
test_task_fd_query_rawtp:PASS:bpf_task_fd_query (len = 3) 0 nsec
test_task_fd_query_rawtp:PASS:check_results 0 nsec
test_task_fd_query_tp_core:PASS:bpf_prog_load 0 nsec
test_task_fd_query_tp_core:PASS:open 0 nsec
test_task_fd_query_tp_core:PASS:read 0 nsec
test_task_fd_query_tp_core:PASS:perf_event_open 0 nsec
test_task_fd_query_tp_core:PASS:perf_event_ioc_enable 0 nsec
test_task_fd_query_tp_core:PASS:perf_event_ioc_set_bpf 0 nsec
test_task_fd_query_tp_core:PASS:bpf_task_fd_query 0 nsec
test_task_fd_query_tp_core:PASS:check_results 0 nsec
test_task_fd_query_tp_core:PASS:bpf_prog_load 0 nsec
test_task_fd_query_tp_core:PASS:open 0 nsec
test_task_fd_query_tp_core:PASS:read 0 nsec
test_task_fd_query_tp_core:PASS:perf_event_open 0 nsec
test_task_fd_query_tp_core:PASS:perf_event_ioc_enable 0 nsec
test_task_fd_query_tp_core:PASS:perf_event_ioc_set_bpf 0 nsec
test_task_fd_query_tp_core:PASS:bpf_task_fd_query 0 nsec
test_task_fd_query_tp_core:PASS:check_results 0 nsec
test_reference_tracking:PASS:sk_lookup_success 0 nsec
test_reference_tracking:PASS:sk_lookup_success_simple 0 nsec
test_reference_tracking:PASS:fail_use_after_free 0 nsec
test_reference_tracking:PASS:fail_modify_sk_pointer 0 nsec
test_reference_tracking:PASS:fail_modify_sk_or_null_pointer 0 nsec
test_reference_tracking:PASS:fail_no_release 0 nsec
test_reference_tracking:PASS:fail_release_twice 0 nsec
test_reference_tracking:PASS:fail_release_unchecked 0 nsec
test_reference_tracking:PASS:fail_no_release_subcall 0 nsec
test_queue_stack_map:PASS:bpf_map_pop_elem 342 nsec
test_queue_stack_map:PASS:check-queue-stack-map-empty 323 nsec
test_queue_stack_map:PASS:bpf_map_push_elem 323 nsec
test_queue_stack_map:PASS:bpf_map_pop_elem 325 nsec
test_queue_stack_map:PASS:check-queue-stack-map-empty 311 nsec
test_queue_stack_map:PASS:bpf_map_push_elem 311 nsec
Summary: 175 PASSED, 2 FAILED

[ 429.727565] ========================================================
[ 429.733916] WARNING: possible irq lock inversion dependency detected
[ 429.740282] 5.0.0-dbg-DEV #550 Not tainted
[ 429.744381] --------------------------------------------------------
[ 429.750743] dd/16374 just changed the state of lock:
[ 429.755751] 0000000062c2321f (&head->lock){+...}, at: pcpu_freelist_push+0x28/0x50
[ 429.763348] but this lock was taken by another, HARDIRQ-safe lock in the past:
[ 429.770609]  (&rq->lock){-.-.}
[ 429.770610] and interrupts could create inverse lock ordering between them.
[ 429.785089] other info that might help us debug this:
[ 429.791630]  Possible interrupt unsafe locking scenario:
[ 429.798432]        CPU0                    CPU1
[ 429.802964]        ----                    ----
[ 429.807513]   lock(&head->lock);
[ 429.810746]                                local_irq_disable();
[ 429.816700]                                lock(&rq->lock);
[ 429.822322]                                lock(&head->lock);
[ 429.828094]   <Interrupt>
[ 429.830724]     lock(&rq->lock);
[ 429.833977]  *** DEADLOCK ***
[ 429.839929] 1 lock held by dd/16374:
[ 429.843526]  #0: 00000000a4c09748 (rcu_read_lock){....}, at: trace_call_bpf+0x38/0x1f0
[ 429.851498] the shortest dependencies between 2nd lock and 1st lock:
[ 429.859348] -> (&rq->lock){-.-.} {
[ 429.862847]    IN-HARDIRQ-W at:
[ 429.866091]      lock_acquire+0xa7/0x190
[ 429.871531]      _raw_spin_lock+0x2f/0x40
[ 429.877051]      scheduler_tick+0x51/0x100
[ 429.882621]      update_process_times+0x6f/0x90
[ 429.888660]      tick_periodic+0x2b/0xc0
[ 429.894065]      tick_handle_periodic+0x25/0x70
[ 429.900096]      timer_interrupt+0x15/0x20
[ 429.905729]      __handle_irq_event_percpu+0x44/0x2b0
[ 429.912265]      handle_irq_event+0x60/0xc0
[ 429.917924]      handle_edge_irq+0x8b/0x220
[ 429.923570]      handle_irq+0x21/0x30
[ 429.928724]      do_IRQ+0x64/0x120
[ 429.933613]      ret_from_intr+0x0/0x1d
[ 429.938935]      timer_irq_works+0x5b/0xfd
[ 429.944528]      setup_IO_APIC+0x378/0x82b
[ 429.950101]      apic_intr_mode_init+0x16d/0x172
[ 429.956202]      x86_late_time_init+0x15/0x1c
[ 429.962049]      start_kernel+0x44b/0x4fe
[ 429.967532]      x86_64_start_reservations+0x24/0x26
[ 429.973988]      x86_64_start_kernel+0x6f/0x72
[ 429.979942]      secondary_startup_64+0xa4/0xb0
[ 429.985960]    IN-SOFTIRQ-W at:
[ 429.989183]      lock_acquire+0xa7/0x190
[ 429.994582]      _raw_spin_lock+0x2f/0x40
[ 430.000067]      try_to_wake_up+0x1ef/0x610
[ 430.005740]      wake_up_process+0x15/0x20
[ 430.011314]      swake_up_one+0x36/0x70
[ 430.016648]      rcu_gp_kthread_wake+0x3c/0x40
[ 430.022591]      rcu_accelerate_cbs_unlocked+0x8c/0xd0
[ 430.029219]      rcu_process_callbacks+0xdd/0xc50
[ 430.035434]      __do_softirq+0x105/0x465
[ 430.040952]      irq_exit+0xc8/0xd0
[ 430.045932]      smp_apic_timer_interrupt+0xa5/0x230
[ 430.052387]      apic_timer_interrupt+0xf/0x20
[ 430.058331]      clear_page_erms+0x7/0x10
[ 430.063825]      __alloc_pages_nodemask+0x165/0x390
[ 430.070201]      dsalloc_pages+0x65/0x90
[ 430.075657]      reserve_ds_buffers+0x136/0x500
[ 430.081679]      x86_reserve_hardware+0x16d/0x180
[ 430.087874]      x86_pmu_event_init+0x4b/0x210
[ 430.093828]      perf_try_init_event+0x8f/0xb0
[ 430.099744]      perf_event_alloc+0xa1d/0xc90
[ 430.105576]      perf_event_create_kernel_counter+0x24/0x150
[ 430.112721]      hardlockup_detector_event_create+0x41/0x90
[ 430.119785]      hardlockup_detector_perf_enable+0xe/0x40
[ 430.126653]      watchdog_nmi_enable+0xe/0x20
[ 430.132501]      watchdog_enable+0xb8/0xd0
[ 430.138097]      softlockup_start_fn+0x15/0x20
[ 430.144015]      smp_call_on_cpu_callback+0x2a/0x60
[ 430.150393]      process_one_work+0x1f4/0x580
[ 430.156231]      worker_thread+0x6f/0x430
[ 430.161708]      kthread+0x132/0x170
[ 430.166791]      ret_from_fork+0x24/0x30
[ 430.172180]    INITIAL USE at:
[ 430.175334]      lock_acquire+0xa7/0x190
[ 430.180656]      _raw_spin_lock_irqsave+0x3a/0x50
[ 430.186763]      rq_attach_root+0x1d/0xd0
[ 430.192153]      sched_init+0x30b/0x40d
[ 430.197409]      start_kernel+0x283/0x4fe
[ 430.202825]      x86_64_start_reservations+0x24/0x26
[ 430.209181]      x86_64_start_kernel+0x6f/0x72
[ 430.215032]      secondary_startup_64+0xa4/0xb0
[ 430.220964]  }
[ 430.222724]  ... key at: [] __key.71964+0x0/0x8
[ 430.229345]  ... acquired at:
[ 430.232404]    _raw_spin_lock+0x2f/0x40
[ 430.236260]    pcpu_freelist_pop+0x77/0xf0
[ 430.240349]    bpf_get_stackid+0x1c4/0x440
[ 430.244465]    bpf_get_stackid_tp+0x11/0x20
[ 430.250156] -> (&head->lock){+...} {
[ 430.253742]    HARDIRQ-ON-W at:
[ 430.256878]      lock_acquire+0xa7/0x190
[ 430.262095]      _raw_spin_lock+0x2f/0x40
[ 430.267397]      pcpu_freelist_push+0x28/0x50
[ 430.273088]      bpf_get_stackid+0x41e/0x440
[ 430.278677]      bpf_get_stackid_tp+0x11/0x20
[ 430.284350]    INITIAL USE at:
[ 430.287411]      lock_acquire+0xa7/0x190
[ 430.292591]      _raw_spin_lock+0x2f/0x40
[ 430.297833]      pcpu_freelist_populate+0xc3/0x120
[ 430.303840]      htab_map_alloc+0x41e/0x550
[ 430.309245]      __do_sys_bpf+0x27e/0x1b50
[ 430.314591]      __x64_sys_bpf+0x1a/0x20
[ 430.319738]      do_syscall_64+0x5a/0x460
[ 430.324962]      entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 430.331598]  }
[ 430.333274]  ... key at: [] __key.13162+0x0/0x8
[ 430.339821]  ... acquired at:
[ 430.342795]    mark_lock+0x3c4/0x630
[ 430.346371]    __lock_acquire+0x3b2/0x1850
[ 430.350493]    lock_acquire+0xa7/0x190
[ 430.354282]    _raw_spin_lock+0x2f/0x40
[ 430.358129]    pcpu_freelist_push+0x28/0x50
[ 430.362331]    bpf_get_stackid+0x41e/0x440
[ 430.366438]    bpf_get_stackid_tp+0x11/0x20
[ 430.372128] stack backtrace:
[ 430.376479] CPU: 29 PID: 16374 Comm: dd Not tainted 5.0.0-dbg-DEV #550
[ 430.383008] Hardware name: Intel RML,PCH/Iota_QC_19, BIOS 2.54.0 06/07/2018
[ 430.389967] Call Trace:
[ 430.392433]  dump_stack+0x67/0x95
[ 430.395756]  print_irq_inversion_bug.part.38+0x1b8/0x1c4
[ 430.401066]  check_usage_backwards+0x156/0x160
[ 430.405523]  mark_lock+0x3c4/0x630
[ 430.408925]  ? mark_lock+0x3c4/0x630
[ 430.412496]  ? print_shortest_lock_dependencies+0x1b0/0x1b0
[ 430.418092]  __lock_acquire+0x3b2/0x1850
[ 430.422018]  ? find_get_entry+0x1b1/0x320
[ 430.426050]  ? find_get_entry+0x1d0/0x320
[ 430.430063]  lock_acquire+0xa7/0x190
[ 430.433667]  ? lock_acquire+0xa7/0x190
[ 430.437433]  ? pcpu_freelist_push+0x28/0x50
[ 430.441635]  _raw_spin_lock+0x2f/0x40
[ 430.445317]  ? pcpu_freelist_push+0x28/0x50
[ 430.449494]  pcpu_freelist_push+0x28/0x50
[ 430.453513]  bpf_get_stackid+0x41e/0x440
[ 430.457446]  bpf_get_stackid_tp+0x11/0x20
[ 430.461450]  ? trace_call_bpf+0xe4/0x1f0
[ 430.465365]  ? perf_trace_run_bpf_submit+0x42/0xb0
[ 430.470155]  ? perf_trace_urandom_read+0xbf/0x100
[ 430.474851]  ? urandom_read+0x20f/0x350
[ 430.478685]  ? vfs_read+0xb8/0x190
[ 430.482096]  ? __x64_sys_read+0x61/0xd0
[ 430.485949]  ? do_syscall_64+0x21/0x460
[ 430.489797]  ? do_syscall_64+0x5a/0x460
[ 430.493635]  ? entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 451.715758] perf: interrupt took too long (2516 > 2500), lowering kernel.perf_event_max_sample_rate to 79000
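For completeness, and reusing the types from the sketch earlier in this
mail, an irqsave variant of the push would look roughly like this
(illustration only, not a proposed patch):

static void freelist_push_irqsave(struct pcpu_freelist_head *head,
				  struct pcpu_freelist_node *node)
{
	unsigned long flags;

	/* makes head->lock hardirq-safe, avoiding the inversion above */
	raw_spin_lock_irqsave(&head->lock, flags);
	node->next = head->first;
	head->first = node;
	raw_spin_unlock_irqrestore(&head->lock, flags);
}

That would address the head->lock vs. rq->lock inversion lockdep reports,
but an NMI can still fire on this CPU while the lock is held, so it does
not answer the bpf_get_stackid()-from-NMI question.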