From: James Smart <james.smart@broadcom.com>
To: Daniel Wagner <dwagner@suse.de>
Cc: linux-nvme@lists.infradead.org
Subject: Re: [PATCH v2 2/3] nvme-fc: eliminate terminate_io use by nvme_fc_error_recovery
Date: Mon, 23 Nov 2020 13:44:50 -0800
Message-ID: <5ac00957-7b26-e69b-ab9b-3f83bfcb1498@broadcom.com>
In-Reply-To: <20201119105105.czsz4tmjrizzqlex@beryllium.lan>


On 11/19/2020 2:51 AM, Daniel Wagner wrote:
> Hi James,
>
> On Fri, Oct 23, 2020 at 03:27:51PM -0700, James Smart wrote:
>> nvme_fc_error_recovery() special cases handling when in CONNECTING state
>> and calls __nvme_fc_terminate_io(). __nvme_fc_terminate_io() itself
>> special cases CONNECTING state and calls the routine to abort outstanding
>> ios.
>>
>> Simplify the sequence by putting the call to abort outstanding ios directly
>> in nvme_fc_error_recovery.
>>
>> Move the location of __nvme_fc_abort_outstanding_ios(), and
>> nvme_fc_terminate_exchange() which is called by it, to avoid adding
>> function prototypes for nvme_fc_error_recovery().
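
For reference, a minimal sketch of the restructured routine. The body is
condensed and illustrative, not the actual patch; in particular the second
argument to __nvme_fc_abort_outstanding_ios() is assumed to select whether
the queues are restarted afterwards:

static void
nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg)
{
	/*
	 * Illustrative sketch only. In CONNECTING state the error
	 * occurred during a reconnect attempt: abort the outstanding
	 * ios directly instead of bouncing through
	 * __nvme_fc_terminate_io().
	 */
	if (ctrl->ctrl.state == NVME_CTRL_CONNECTING) {
		__nvme_fc_abort_outstanding_ios(ctrl, true);
		return;
	}

	/* Otherwise only act on the first error while LIVE. */
	if (ctrl->ctrl.state != NVME_CTRL_LIVE)
		return;

	dev_warn(ctrl->ctrl.device,
		"NVME-FC{%d}: error_recovery: %s\n", ctrl->cnum, errmsg);
	nvme_reset_ctrl(&ctrl->ctrl);
}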
> During local testing I ran into this problem:
>
>   BUG: scheduling while atomic: swapper/37/0/0x00000100
>   Modules linked in: iscsi_ibft(E) iscsi_boot_sysfs(E) rfkill(E) intel_rapl_msr(E) intel_rapl_common(E) sb_edac(E) x86_pkg_temp_thermal(E) intel_powerclamp(E) ext4(E) nls_iso8859_1(E) coretemp(E) nls_cp437(E) crc16(E) kvm_intel(E) mbcache(E) jbd2(E) kvm(E) vfat(E) irqbypass(E) crc32_pclmul(E) fat(E) ghash_clmulni_intel(E) iTCO_wdt(E) lpfc(E) iTCO_vendor_support(E) aesni_intel(E) nvmet_fc(E) aes_x86_64(E) ipmi_ssif(E) crypto_simd(E) nvmet(E) bnx2x(E) cryptd(E) glue_helper(E) pcspkr(E) lpc_ich(E) ipmi_si(E) tg3(E) mdio(E) ioatdma(E) hpilo(E) mfd_core(E) hpwdt(E) ipmi_devintf(E) configfs(E) libphy(E) dca(E) ipmi_msghandler(E) button(E) btrfs(E) libcrc32c(E) xor(E) raid6_pq(E) mgag200(E) drm_vram_helper(E) sd_mod(E) ttm(E) i2c_algo_bit(E) qla2xxx(E) drm_kms_helper(E) syscopyarea(E) nvme_fc(E) sysfillrect(E) sysimgblt(E) nvme_fabrics(E) uhci_hcd(E) fb_sys_fops(E) ehci_pci(E) ehci_hcd(E) nvme_core(E) crc32c_intel(E) scsi_transport_fc(E) drm(E) usbcore(E) hpsa(E) scsi_transport_sas(E)
>    wmi(E) sg(E) dm_multipath(E) dm_mod(E) scsi_dh_rdac(E) scsi_dh_emc(E) scsi_dh_alua(E) scsi_mod(E) efivarfs(E)
>   Supported: No, Unreleased kernel
>   CPU: 37 PID: 0 Comm: swapper/37 Tainted: G            EL      5.3.18-0.g7362c5c-default #1 SLE15-SP2 (unreleased)
>   Hardware name: HP ProLiant DL580 Gen9/ProLiant DL580 Gen9, BIOS U17 10/21/2019
>   Call Trace:
>    <IRQ>
>    dump_stack+0x66/0x8b
>    __schedule_bug+0x51/0x70
>    __schedule+0x697/0x750
>    schedule+0x2f/0xa0
>    schedule_timeout+0x1dd/0x300
>    ? lpfc_sli4_fp_handle_fcp_wcqe.isra.31+0x146/0x390 [lpfc]
>    ? update_group_capacity+0x25/0x1b0
>    wait_for_completion+0xba/0x140
>    ? wake_up_q+0xa0/0xa0
>    __wait_rcu_gp+0x110/0x130
>    synchronize_rcu+0x55/0x80
>    ? __call_rcu+0x4e0/0x4e0
>    ? __bpf_trace_rcu_invoke_callback+0x10/0x10
>    __nvme_fc_abort_outstanding_ios+0x5f/0x90 [nvme_fc]
>    nvme_fc_error_recovery+0x25/0x70 [nvme_fc]
>    nvme_fc_fcpio_done+0x243/0x400 [nvme_fc]
>    lpfc_sli4_nvme_xri_aborted+0x62/0x100 [lpfc]
>    lpfc_sli4_sp_handle_abort_xri_wcqe.isra.56+0x4c/0x170 [lpfc]
>    ? lpfc_sli4_fp_handle_cqe+0x8b/0x490 [lpfc]
>    lpfc_sli4_fp_handle_cqe+0x8b/0x490 [lpfc]
>    __lpfc_sli4_process_cq+0xfd/0x270 [lpfc]
>    ? lpfc_sli4_sp_handle_abort_xri_wcqe.isra.56+0x170/0x170 [lpfc]
>    __lpfc_sli4_hba_process_cq+0x3c/0x110 [lpfc]
>    lpfc_cq_poll_hdler+0x16/0x20 [lpfc]
>    irq_poll_softirq+0x88/0x110
>    __do_softirq+0xe3/0x2dc
>    irq_exit+0xd5/0xe0
>    do_IRQ+0x7f/0xd0
>    common_interrupt+0xf/0xf
>    </IRQ>
>
>
> I think we can't move __nvme_fc_abort_outstanding_ios() into this
> path, as we are still running in IRQ context.
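
The trace makes the constraint concrete: the completion arrives via
irq_poll, i.e. softirq context, and __nvme_fc_abort_outstanding_ios()
sleeps because quiescing the queues waits out an RCU grace period (the
synchronize_rcu() in the backtrace). A hypothetical fragment showing the
shape of the violation; example_completion_path() is made up purely for
illustration:

/* Hypothetical fragment, not transport code: shape of the violation. */
static void
example_completion_path(struct nvme_fc_ctrl *ctrl)
{
	/* True here: nvme_fc_fcpio_done() ran under irq_poll (softirq). */
	WARN_ON_ONCE(in_interrupt());

	/*
	 * Quiescing the queues inside this call ends up in
	 * synchronize_rcu(), which blocks for a grace period --
	 * hence "scheduling while atomic" in the log above.
	 */
	__nvme_fc_abort_outstanding_ios(ctrl, true);
}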
>
> Thanks,
> Daniel
>

Daniel,

I agree with you. This was brought about by lpfc converting to use
blk_irq poll. I'll put something together for the transport, as it is
likely reasonable to expect use of blk_irq poll by other drivers as well.
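
One plausible shape for that transport change is to bounce the recovery
to process context through a work item. The sketch below is an assumption
about what such a fix could look like, not a posted patch; the ioerr_work
field and both helper names are invented for illustration:

/* Assumed names: ioerr_work and both helpers are illustrative only. */
static void
nvme_fc_ctrl_ioerr_work(struct work_struct *work)
{
	struct nvme_fc_ctrl *ctrl =
		container_of(work, struct nvme_fc_ctrl, ioerr_work);

	/* Process context: quiescing the queues may sleep safely here. */
	nvme_fc_error_recovery(ctrl, "transport detected io error");
}

static void
nvme_fc_defer_error_recovery(struct nvme_fc_ctrl *ctrl)
{
	/* Safe from the irq_poll completion path: schedule, don't sleep. */
	queue_work(nvme_reset_wq, &ctrl->ioerr_work);
}

With INIT_WORK(&ctrl->ioerr_work, nvme_fc_ctrl_ioerr_work) done at
controller creation time, the completion handler itself never sleeps.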

-- james



