From: Daniel Wagner <dwagner@suse.de>
To: Arun Easi <aeasi@marvell.com>
Cc: Roman Bolshakov <r.bolshakov@yadro.com>,
	linux-scsi@vger.kernel.org,
	GR-QLogic-Storage-Upstream@marvell.com,
	linux-nvme@lists.infradead.org, Hannes Reinecke <hare@suse.de>,
	Nilesh Javali <njavali@marvell.com>,
	James Smart <james.smart@broadcom.com>
Subject: Re: [EXT] Re: [RFC] qla2xxx: Add dev_loss_tmo kernel module options
Date: Wed, 21 Apr 2021 09:56:59 +0200	[thread overview]
Message-ID: <20210421075659.dwaz7gt6hyqlzpo4@beryllium.lan> (raw)
In-Reply-To: <alpine.LRH.2.21.9999.2104201642290.24132@irv1user01.caveonetworks.com>

Hi Arun,

On Tue, Apr 20, 2021 at 05:25:52PM -0700, Arun Easi wrote:
> On Tue, 20 Apr 2021, 11:28am, Daniel Wagner wrote:
> > As explained, the debugfs interface is not working (okay, that's
> > something which could be fixed) and it has the big problem that it is
> > not under control of udevd. Not sure whether the debugfs entries could
> > be discovered automatically with some new udev rules or not.
>
> Curious, which udev script does this today for FC SCSI?

I am currently figuring out the 'correct' settings for passing the
various tests our partner runs in their labs. There are no upstream udev
rules for this (yet).

Anyway, the settings are

  ACTION!="add|change", GOTO="tmo_end"
  # SCSI fc_remote_ports
  KERNELS=="rport-?*", SUBSYSTEM=="fc_remote_ports", ATTR{fast_io_fail_tmo}="5", ATTR{dev_loss_tmo}="4294967295"
  # nvme fabrics
  KERNELS=="ctl", SUBSYSTEMS=="nvme-fabrics", ATTR{transport}=="fc", ATTR{fast_io_fail_tmo}="-1", ATTR{ctrl_loss_tmo}="-1"
  LABEL="tmo_end"

and this works for lpfc but only half for qla2xxx.
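For anyone who wants to check the effect of the fc_remote_ports rule by
hand, here is a minimal sketch of the equivalent manual sysfs writes.
The function name and the overridable root path are just for
illustration; on a real system the root is /sys/class/fc_remote_ports:

```shell
# Sketch: apply the same timeouts the udev rule above sets, by walking
# the FC remote ports and writing the attributes directly. The first
# argument overrides the sysfs root so the function can be exercised
# outside a real system.
apply_fc_tmo() {
    root="${1:-/sys/class/fc_remote_ports}"
    for rport in "$root"/rport-*; do
        [ -d "$rport" ] || continue
        # 5s fast I/O failure, "infinite" (UINT_MAX) device-loss timeout
        echo 5          > "$rport/fast_io_fail_tmo"
        echo 4294967295 > "$rport/dev_loss_tmo"
    done
}
```

Note this only covers the SCSI side; the nvme-fabrics attributes from
the second rule live under a different sysfs hierarchy.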

> In theory, the existing FC nvmediscovery udev event has enough information
> to find out the right qla2xxx debugfs node and set dev_loss_tmo.

Ah, I didn't know about nvmediscovery until very recently. I'll try to
get it working with this approach (as this patch is not really a proper
solution).
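For reference, a rule keyed on that event could look roughly like the
sketch below. The FC_EVENT and NVMEFC_* property names are the ones
used by nvme-cli's autoconnect rules; the RUN helper path and its
arguments are hypothetical:

```
# Sketch: react to the fc transport's nvmediscovery uevent.
# /usr/local/sbin/set-qla2xxx-tmo is a hypothetical helper that would
# locate the matching qla2xxx debugfs node and write dev_loss_tmo.
ACTION=="change", SUBSYSTEM=="fc", ENV{FC_EVENT}=="nvmediscovery", \
  ENV{NVMEFC_HOST_TRADDR}=="*", ENV{NVMEFC_TRADDR}=="*", \
  RUN+="/usr/local/sbin/set-qla2xxx-tmo $env{NVMEFC_HOST_TRADDR} $env{NVMEFC_TRADDR}"
```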

> > > What exists for FCP/SCSI is quite clear and reasonable. I don't know why
> > > FC-NVMe rports should be way too different.
> >
> > The lpfc driver exposes both the FCP/SCSI and the FC-NVMe rports nicely
> > via fc_remote_ports, and this is what I would like to have from the
> > qla2xxx driver as well. qla2xxx exposes the FCP/SCSI rports but not the
> > FC-NVMe rports.
> >
>
> Given that FC-NVMe does not have a sysfs hierarchy like FC SCSI, I see
> utility in making FC-NVMe ports available via fc_remote_ports. If, though,
> an FC target port is dual-protocol aware, this would leave us with only one
> knob to control both.

So far I haven't had the need to distinguish between the two protocols
for the dev_loss_tmo setting. I think this is what Hannes was also
trying to say: it might make sense to introduce per-protocol sysfs
APIs.

> I think, going with fc_remote_ports is better than introducing one more
> way (like this patch) to set this.

As I said, this patch was really an RFC. I will experiment with
nvmediscovery, though I think this is just a stopgap solution. Having
two completely different ways to configure the same thing is suboptimal
and is asking for a lot of trouble with end customers. I am really
hoping we can streamline the current APIs, so that we have only one
recommended way to configure the system, independent of the driver
involved.

Thanks,
Daniel


Thread overview: 14+ messages
2021-04-19 10:00 [RFC] qla2xxx: Add dev_loss_tmo kernel module options Daniel Wagner
2021-04-19 16:19 ` Randy Dunlap
2021-04-20 12:37   ` Daniel Wagner
2021-04-20 14:51     ` Himanshu Madhani
2021-04-20 15:35     ` Randy Dunlap
2021-04-20 17:27 ` Benjamin Block
2021-04-20 17:35 ` Roman Bolshakov
2021-04-20 18:28   ` Daniel Wagner
2021-04-21  0:25     ` [EXT] " Arun Easi
2021-04-21  7:56       ` Daniel Wagner [this message]
2021-04-27  9:51         ` Daniel Wagner
2021-04-27 22:35           ` Arun Easi
2021-04-28  7:17             ` Daniel Wagner
2021-04-28 14:51       ` James Smart
