From mboxrd@z Thu Jan  1 00:00:00 1970
From: Robert LeBlanc
Subject: Re: iscsi_trx going into D state
Date: Tue, 27 Dec 2016 13:22:28 -0700
Message-ID: 
References: <4fc72e32-26fb-96bd-8a0d-814eef712b43@suse.com> <1476774332.8490.43.camel@haakon3.risingtidesystems.com> <1476858359.8490.97.camel@haakon3.risingtidesystems.com> <1477780190.22703.47.camel@haakon3.risingtidesystems.com> <6beeb461-0391-cb29-8cf7-74f4c8feda34@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Return-path: 
In-Reply-To: <6beeb461-0391-cb29-8cf7-74f4c8feda34@redhat.com>
Sender: linux-scsi-owner@vger.kernel.org
To: Doug Ledford
Cc: "Nicholas A. Bellinger" , Zhu Lingshan , linux-rdma , linux-scsi@vger.kernel.org, Sagi Grimberg , Christoph Hellwig
List-Id: linux-rdma@vger.kernel.org

I looked at this code and it is quite above my ability. I created this patch, but I don't know how to interrogate the queue to see how many items are in it. If you can give me some more direction on what to try, I can keep fumbling around with this until someone smarter than me can figure it out. This is now a blocker for me, so I'm going to beat my head on this until it is fixed. Thanks for being patient with me.
diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index 8368764..9e5bd4b 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -1954,22 +1954,27 @@ static void __ib_drain_sq(struct ib_qp *qp)
 		return;
 	}
+	printk("Setting up drain callback.");
 	swr.wr_cqe = &sdrain.cqe;
 	sdrain.cqe.done = ib_drain_qp_done;
+	printk("Starting init_completion.");
 	init_completion(&sdrain.done);
+	printk("Calling ib_modify_qp.");
 	ret = ib_modify_qp(qp, &attr, IB_QP_STATE);
 	if (ret) {
 		WARN_ONCE(ret, "failed to drain send queue: %d\n", ret);
 		return;
 	}
+	printk("Calling ib_post_send.");
 	ret = ib_post_send(qp, &swr, &bad_swr);
 	if (ret) {
 		WARN_ONCE(ret, "failed to drain send queue: %d\n", ret);
 		return;
 	}
+	printk("Starting wait_for_completion.");
 	wait_for_completion(&sdrain.done);
 }

I get the same processes in D state (and same backtrace) and this is what shows up in dmesg:

[  920.317401] isert: isert_rdma_accept: rdma_accept() failed with: -110
[  920.325554] ------------[ cut here ]------------
[  920.330188] WARNING: CPU: 11 PID: 705 at drivers/infiniband/core/verbs.c:303 ib_dealloc_pd+0x58/0xa0 [ib_core]
[  920.340210] Modules linked in: target_core_user target_core_pscsi target_core_file target_core_iblock 8021q garp mrp rpcrdma sunrpc ib_isert ib_iser ib_srpt ib_srp scsi_transport_srp ib_ipoib iptable_filter rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm sb_edac edac_core x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm ext4 ipmi_devintf irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel jbd2 lrw gf128mul glue_helper mbcache iTCO_wdt ablk_helper mei_me iTCO_vendor_support cryptd joydev sg mei i2c_i801 lpc_ich pcspkr mfd_core ioatdma shpchp i2c_smbus ipmi_si wmi ipmi_msghandler acpi_power_meter acpi_pad ip_tables xfs libcrc32c raid1 mlx4_en mlx4_ib mlx5_ib sd_mod ib_core ast drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm mlx5_core igb mlx4_core
[  920.412347] ahci ptp drm libahci pps_core libata dca i2c_algo_bit be2iscsi bnx2i cnic uio qla4xxx iscsi_boot_sysfs
[  920.421744] CPU: 11 PID: 705 Comm: kworker/11:2 Not tainted 4.9.0+ #3
[  920.428199] Hardware name: Supermicro SYS-6028TP-HTFR/X10DRT-PIBF, BIOS 1.1 08/03/2015
[  920.436126] Workqueue: ib_cm cm_work_handler [ib_cm]
[  920.441113] ffffc90032a03a40 ffffffff8134d45f 0000000000000000 0000000000000000
[  920.448583] ffffc90032a03a80 ffffffff81083371 0000012fa04a1c4a ffff883f5e886e80
[  920.456073] ffff887f1eaa4400 ffff887f1eaa5800 ffffc90032a03b08 00000000ffffff92
[  920.463535] Call Trace:
[  920.465993] [] dump_stack+0x63/0x84
[  920.471144] [] __warn+0xd1/0xf0
[  920.475941] [] warn_slowpath_null+0x1d/0x20
[  920.481790] [] ib_dealloc_pd+0x58/0xa0 [ib_core]
[  920.488072] [] isert_device_put+0x50/0xc0 [ib_isert]
[  920.494693] [] isert_connect_request+0x68e/0xd40 [ib_isert]
[  920.501924] [] isert_cma_handler+0xe3/0x3b0 [ib_isert]
[  920.508725] [] ? cma_new_conn_id+0x276/0x4b0 [rdma_cm]
[  920.515521] [] cma_listen_handler+0x20/0x30 [rdma_cm]
[  920.522227] [] cma_req_handler+0x1f5/0x4c0 [rdma_cm]
[  920.528851] [] cm_process_work+0x25/0xf0 [ib_cm]
[  920.535125] [] cm_req_handler+0x8d4/0xc70 [ib_cm]
[  920.541485] [] cm_work_handler+0x1ce/0x1648 [ib_cm]
[  920.548021] [] process_one_work+0x152/0x400
[  920.553861] [] worker_thread+0x125/0x4b0
[  920.559443] [] ? rescuer_thread+0x380/0x380
[  920.565284] [] kthread+0xd9/0xf0
[  920.570178] [] ? kthread_park+0x60/0x60
[  920.576389] [] ret_from_fork+0x25/0x30
[  920.582473] ---[ end trace 1f5a1831f9d2d964 ]---
[  920.587907] ------------[ cut here ]------------
[  920.593213] WARNING: CPU: 11 PID: 705 at drivers/infiniband/core/cq.c:189 ib_free_cq+0x97/0xc0 [ib_core]
[  920.603383] Modules linked in: target_core_user target_core_pscsi target_core_file target_core_iblock 8021q garp mrp rpcrdma sunrpc ib_isert ib_iser ib_srpt ib_srp scsi_transport_srp ib_ipoib iptable_filter rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm sb_edac edac_core x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm ext4 ipmi_devintf irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel jbd2 lrw gf128mul glue_helper mbcache iTCO_wdt ablk_helper mei_me iTCO_vendor_support cryptd joydev sg mei i2c_i801 lpc_ich pcspkr mfd_core ioatdma shpchp i2c_smbus ipmi_si wmi ipmi_msghandler acpi_power_meter acpi_pad ip_tables xfs libcrc32c raid1 mlx4_en mlx4_ib mlx5_ib sd_mod ib_core ast drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm mlx5_core igb mlx4_core
[  920.679694] ahci ptp drm libahci pps_core libata dca i2c_algo_bit be2iscsi bnx2i cnic uio qla4xxx iscsi_boot_sysfs
[  920.690579] CPU: 11 PID: 705 Comm: kworker/11:2 Tainted: G        W       4.9.0+ #3
[  920.699008] Hardware name: Supermicro SYS-6028TP-HTFR/X10DRT-PIBF, BIOS 1.1 08/03/2015
[  920.707701] Workqueue: ib_cm cm_work_handler [ib_cm]
[  920.713438] ffffc90032a03a18 ffffffff8134d45f 0000000000000000 0000000000000000
[  920.721648] ffffc90032a03a58 ffffffff81083371 000000bd5e886e80 ffff887f1eaa6800
[  920.729850] ffff883f5e886e20 ffff883f5e886e18 ffffc90032a03b08 00000000ffffff92
[  920.738026] Call Trace:
[  920.741188] [] dump_stack+0x63/0x84
[  920.747027] [] __warn+0xd1/0xf0
[  920.752488] [] warn_slowpath_null+0x1d/0x20
[  920.758989] [] ib_free_cq+0x97/0xc0 [ib_core]
[  920.765649] [] isert_free_comps.isra.26+0x38/0x60 [ib_isert]
[  920.773609] [] isert_device_put+0x5d/0xc0 [ib_isert]
[  920.780868] [] isert_connect_request+0x68e/0xd40 [ib_isert]
[  920.788734] [] isert_cma_handler+0xe3/0x3b0 [ib_isert]
[  920.796157] [] ? cma_new_conn_id+0x276/0x4b0 [rdma_cm]
[  920.803586] [] cma_listen_handler+0x20/0x30 [rdma_cm]
[  920.810916] [] cma_req_handler+0x1f5/0x4c0 [rdma_cm]
[  920.818167] [] cm_process_work+0x25/0xf0 [ib_cm]
[  920.825063] [] cm_req_handler+0x8d4/0xc70 [ib_cm]
[  920.832051] [] cm_work_handler+0x1ce/0x1648 [ib_cm]
[  920.839208] [] process_one_work+0x152/0x400
[  920.845669] [] worker_thread+0x125/0x4b0
[  920.851880] [] ? rescuer_thread+0x380/0x380
[  920.858352] [] kthread+0xd9/0xf0
[  920.863857] [] ? kthread_park+0x60/0x60
[  920.869975] [] ret_from_fork+0x25/0x30
[  920.876006] ---[ end trace 1f5a1831f9d2d965 ]---
[  920.884335] isert: isert_cma_handler: failed handle connect request -110
[ 1639.592451] Setting up drain callback.
[ 1639.596073] Starting init_completion.
[ 1639.600683] Calling ib_modify_qp.
[ 1639.602616] Calling ib_post_send.
[ 1639.606550] Starting wait_for_completion.
[ 1656.976015] iSCSI Login timeout on Network Portal 0.0.0.0:3260
[ 1674.254027] Setting up drain callback.
[ 1674.257634] Starting init_completion.
[ 1674.262107] Calling ib_modify_qp.
[ 1674.264011] Calling ib_post_send.
[ 1674.267969] Starting wait_for_completion.
[ 1691.583888] Setting up drain callback.
[ 1691.588490] Starting init_completion.
[ 1691.590677] Calling ib_modify_qp.
[ 1691.594766] Calling ib_post_send.
[ 1691.596607] Starting wait_for_completion.
[ 1708.913356] Setting up drain callback.
[ 1708.915658] Starting init_completion.
[ 1708.920152] Calling ib_modify_qp.
[ 1708.922041] Calling ib_post_send.
[ 1708.926048] Starting wait_for_completion.
[ 1726.244365] Setting up drain callback.
[ 1726.248973] Starting init_completion.
[ 1726.251165] Calling ib_modify_qp.
[ 1726.255189] Calling ib_post_send.
[ 1726.257031] Starting wait_for_completion.
[ 1743.574751] Setting up drain callback.
[ 1743.577044] Starting init_completion.
[ 1743.581496] Calling ib_modify_qp.
[ 1743.583404] Calling ib_post_send.
[ 1743.587346] Starting wait_for_completion.
[ 1760.904470] Setting up drain callback.
[ 1760.908991] Starting init_completion.
[ 1760.911206] Calling ib_modify_qp.
[ 1760.915214] Calling ib_post_send.
[ 1760.917062] Starting wait_for_completion.
[ 1778.230821] Setting up drain callback.
[ 1778.233116] Starting init_completion.
[ 1778.237510] Calling ib_modify_qp.
[ 1778.239413] Calling ib_post_send.
.... [keeps repeating]

----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1


On Thu, Dec 22, 2016 at 12:15 PM, Doug Ledford wrote:
> On 12/21/2016 6:39 PM, Robert LeBlanc wrote:
>> I hit a new backtrace today, hopefully it adds something.
>>
>> # cat /proc/19659/stack
>> [] iscsit_stop_session+0x1b1/0x1c0
>> [] iscsi_check_for_session_reinstatement+0x1e2/0x270
>> [] iscsi_target_check_for_existing_instances+0x30/0x40
>> [] iscsi_target_do_login+0x138/0x630
>> [] iscsi_target_start_negotiation+0x4e/0xa0
>> [] __iscsi_target_login_thread+0x83e/0xf20
>> [] iscsi_target_login_thread+0x24/0x30
>> [] kthread+0xd9/0xf0
>> [] ret_from_fork+0x25/0x30
>> [] 0xffffffffffffffff
>>
>> # cat /proc/21342/stack
>> [] __ib_drain_sq+0x190/0x1c0 [ib_core]
>> [] ib_drain_sq+0x25/0x30 [ib_core]
>> [] ib_drain_qp+0x12/0x30 [ib_core]
>> [] isert_wait_conn+0x5f/0x2d0 [ib_isert]
>> [] iscsit_close_connection+0x157/0x860
>> [] iscsit_take_action_for_connection_exit+0x7b/0xf0
>> [] iscsi_target_rx_thread+0x95/0xa0
>> [] kthread+0xd9/0xf0
>> [] ret_from_fork+0x25/0x30
>> [] 0xffffffffffffffff
>>
>> # ps aux | grep iscsi | grep D
>> root     19659  0.0  0.0      0     0 ?        D    16:12   0:00 [iscsi_np]
>> root     21342  0.0  0.0      0     0 ?        D    16:29   0:00 [iscsi_trx]
>> ----------------
>> Robert LeBlanc
>> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
>
> That looks suspiciously like the __ib_drain_sq is stuck forever waiting
> on a completion that never comes.
>
>>
>> On Thu, Dec 15, 2016 at 1:38 PM, Robert LeBlanc wrote:
>>> Nicholas,
>>>
>>> I've found that the kernels I used were not able to be inspected using
>>> crash and I could not build the debug info for them. So I built a new
>>> 4.9 kernel and verified that I could inspect the crash. It is located
>>> at [1].
>>>
>>> [1] http://mirrors.betterservers.com/trace/crash2.tar.xz
>>> ----------------
>>> Robert LeBlanc
>>> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
>>>
>>>
>>> On Mon, Dec 12, 2016 at 4:57 PM, Robert LeBlanc wrote:
>>>> Nicholas,
>>>>
>>>> After lots of setbacks and having to give up trying to get kernel
>>>> dumps on our "production" systems, I've been able to work out the
>>>> issues we had with kdump and replicate the issue on my dev boxes. I
>>>> have dumps from 4.4.30 and 4.9-rc8 (makedumpfile would not dump, so it
>>>> is a straight copy of /proc/vmcore from the crash kernel). In each
>>>> crash directory, I put a details.txt file that has the process IDs
>>>> that were having problems and a brief description of the set-up at the
>>>> time. This was mostly replicated by starting fio and pulling the
>>>> Infiniband cable until fio gave up. This hardware also has Mellanox
>>>> ConnectX4-LX cards and I also replicated the issue over RoCE using 4.9
>>>> since it has the drivers in-box. Please let me know if you need more
>>>> info, I can test much faster now. The cores/kernels/modules are
>>>> located at [1].
>>>>
>>>> [1] http://mirrors.betterservers.com/trace/crash.tar.xz
>>>>
>>>> Thanks,
>>>> Robert
>>>> ----------------
>>>> Robert LeBlanc
>>>> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
>>>>
>>>>
>>>> On Fri, Nov 4, 2016 at 3:57 PM, Robert LeBlanc wrote:
>>>>> We hit this yesterday, this time it was on the tx thread (the other
>>>>> ones before seem to be on the rx thread). We weren't able to get a
>>>>> kernel dump on this. We'll try to get one next time.
>>>>>
>>>>> # ps axuw | grep "D.*iscs[i]"
>>>>> root     12383  0.0  0.0      0     0 ?        D    Nov03   0:04 [iscsi_np]
>>>>> root     23016  0.0  0.0      0     0 ?        D    Nov03   0:00 [iscsi_ttx]
>>>>> root     23018  0.0  0.0      0     0 ?        D    Nov03   0:00 [iscsi_ttx]
>>>>> # cat /proc/12383/stack
>>>>> [] iscsit_stop_session+0x19f/0x1d0
>>>>> [] iscsi_check_for_session_reinstatement+0x1e6/0x270
>>>>> [] iscsi_target_check_for_existing_instances+0x30/0x40
>>>>> [] iscsi_target_do_login+0x140/0x640
>>>>> [] iscsi_target_start_negotiation+0x1c/0xb0
>>>>> [] iscsi_target_login_thread+0xa9b/0xfc0
>>>>> [] kthread+0xd8/0xf0
>>>>> [] ret_from_fork+0x3f/0x70
>>>>> [] 0xffffffffffffffff
>>>>> # cat /proc/23016/stack
>>>>> [] target_wait_for_sess_cmds+0x49/0x1a0
>>>>> [] isert_wait_conn+0x1ab/0x2f0 [ib_isert]
>>>>> [] iscsit_close_connection+0x162/0x870
>>>>> [] iscsit_take_action_for_connection_exit+0x7f/0x100
>>>>> [] iscsi_target_tx_thread+0x1aa/0x1d0
>>>>> [] kthread+0xd8/0xf0
>>>>> [] ret_from_fork+0x3f/0x70
>>>>> [] 0xffffffffffffffff
>>>>> # cat /proc/23018/stack
>>>>> [] target_wait_for_sess_cmds+0x49/0x1a0
>>>>> [] isert_wait_conn+0x1ab/0x2f0 [ib_isert]
>>>>> [] iscsit_close_connection+0x162/0x870
>>>>> [] iscsit_take_action_for_connection_exit+0x7f/0x100
>>>>> [] iscsi_target_tx_thread+0x1aa/0x1d0
>>>>> [] kthread+0xd8/0xf0
>>>>> [] ret_from_fork+0x3f/0x70
>>>>> [] 0xffffffffffffffff
>>>>>
>>>>> From dmesg:
>>>>> [  394.476332] INFO: rcu_sched self-detected stall on CPU
>>>>> [  394.476334]  20-...: (23976 ticks this GP) idle=edd/140000000000001/0 softirq=292/292 fqs=18788
>>>>> [  394.476336]   (t=24003 jiffies g=3146 c=3145 q=0)
>>>>> [  394.476337] Task dump for CPU 20:
>>>>> [  394.476338] kworker/u68:2   R  running task        0 12906      2 0x00000008
>>>>> [  394.476345] Workqueue: isert_comp_wq isert_cq_work [ib_isert]
>>>>> [  394.476346]  ffff883f2fe38000 00000000f805705e ffff883f7fd03da8 ffffffff810ac8ff
>>>>> [  394.476347]  0000000000000014 ffffffff81adb680 ffff883f7fd03dc0 ffffffff810af239
>>>>> [  394.476348]  0000000000000015 ffff883f7fd03df0 ffffffff810e1cd0 ffff883f7fd17b80
>>>>> [  394.476348] Call Trace:
>>>>> [  394.476354]  [] sched_show_task+0xaf/0x110
>>>>> [  394.476355]  [] dump_cpu_task+0x39/0x40
>>>>> [  394.476357]  [] rcu_dump_cpu_stacks+0x80/0xb0
>>>>> [  394.476359]  [] rcu_check_callbacks+0x540/0x820
>>>>> [  394.476360]  [] ? account_system_time+0x81/0x110
>>>>> [  394.476363]  [] ? tick_sched_do_timer+0x50/0x50
>>>>> [  394.476364]  [] update_process_times+0x39/0x60
>>>>> [  394.476365]  [] tick_sched_handle.isra.17+0x25/0x60
>>>>> [  394.476366]  [] tick_sched_timer+0x3d/0x70
>>>>> [  394.476368]  [] __hrtimer_run_queues+0x102/0x290
>>>>> [  394.476369]  [] hrtimer_interrupt+0xa8/0x1a0
>>>>> [  394.476372]  [] local_apic_timer_interrupt+0x35/0x60
>>>>> [  394.476374]  [] smp_apic_timer_interrupt+0x3d/0x50
>>>>> [  394.476376]  [] apic_timer_interrupt+0x87/0x90
>>>>> [  394.476379]  [] ? console_unlock+0x41e/0x4e0
>>>>> [  394.476380]  [] vprintk_emit+0x2fc/0x500
>>>>> [  394.476382]  [] vprintk_default+0x1f/0x30
>>>>> [  394.476384]  [] printk+0x5d/0x74
>>>>> [  394.476388]  [] transport_lookup_cmd_lun+0x1d1/0x200
>>>>> [  394.476390]  [] iscsit_setup_scsi_cmd+0x230/0x540
>>>>> [  394.476392]  [] isert_rx_do_work+0x3f3/0x7f0 [ib_isert]
>>>>> [  394.476394]  [] isert_cq_work+0x184/0x770 [ib_isert]
>>>>> [  394.476396]  [] process_one_work+0x14f/0x400
>>>>> [  394.476397]  [] worker_thread+0x114/0x470
>>>>> [  394.476398]  [] ? __schedule+0x34a/0x7f0
>>>>> [  394.476399]  [] ? rescuer_thread+0x310/0x310
>>>>> [  394.476400]  [] kthread+0xd8/0xf0
>>>>> [  394.476402]  [] ? kthread_park+0x60/0x60
>>>>> [  394.476403]  [] ret_from_fork+0x3f/0x70
>>>>> [  394.476404]  [] ? kthread_park+0x60/0x60
>>>>> [  405.716632] Unexpected ret: -104 send data 360
>>>>> [  405.721711] tx_data returned -32, expecting 360.
>>>>> ----------------
>>>>> Robert LeBlanc
>>>>> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
>
> When you combine this trace with the newest one, it really makes me
> think there is something of a bad interaction between the new drain cq
> API and the iser/isert implementation to use said API. Sagi, Christoph?
>
> --
> Doug Ledford
> GPG Key ID: B826A3330E572FDD
> Key fingerprint = AE6B 1BDA 122B 23B4 265B 1274 B826 A333 0E57 2FDD
>