From: hch@infradead.org (Christoph Hellwig)
Subject: [PATCH v3 5/7] nvme-fabrics: Add host support for FC transport
Date: Tue, 25 Oct 2016 09:47:06 -0700	[thread overview]
Message-ID: <20161025164706.GE2446@infradead.org> (raw)
In-Reply-To: <580d0ffd.cq5y7S+lNBHxNs60%james.smart@broadcom.com>

> +config NVME_FC
> +	tristate "NVM Express over Fabrics FC host driver"
> +	depends on BLOCK
> +	select NVME_CORE
> +	select NVME_FABRICS
> +	select SG_POOL

needs HAS_DMA, but I already saw the patch for that.
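For reference, the fix presumably just adds a dependency line, so the entry would end up looking roughly like this (sketch, not the actual follow-up patch):

```
config NVME_FC
	tristate "NVM Express over Fabrics FC host driver"
	depends on BLOCK
	depends on HAS_DMA
	select NVME_CORE
	select NVME_FABRICS
	select SG_POOL
```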

> +	/*
> +	 * WARNING:
> +	 * The current linux implementation of a nvme controller
> +	 * allocates a single tag set for all io queues and sizes
> +	 * the io queues to fully hold all possible tags. Thus, the
> +	 * implementation does not reference or care about the sqhd
> +	 * value as it never needs to use the sqhd/sqtail pointers
> +	 * for submission pacing.

I've just sent out two patches that get rid of the concept of a CQE
in the core.  I think all this code would benefit a lot from being
rebased on top of them.

> +		if (unlikely(be16_to_cpu(op->rsp_iu.iu_len) !=
> +				(freq->rcv_rsplen/4)) ||
> +		    unlikely(op->rqno != le16_to_cpu(cqe->command_id))) {

nitpick: missing whitespace around the operator, and the parentheses
around the division aren't needed.

Also we usually use a single unlikely for the whole condition.

> +static void
> +nvme_fc_kill_request(struct request *req, void *data, bool reserved)
> +{
> +	struct nvme_ctrl *nctrl = data;
> +	struct nvme_fc_ctrl *ctrl = to_fc_ctrl(nctrl);
> +	struct nvme_fc_fcp_op *op = blk_mq_rq_to_pdu(req);
> +	int status;
> +
> +	if (!blk_mq_request_started(req))
> +		return;
> +
> +	status = __nvme_fc_abort_op(ctrl, op);
> +	if (status) {
> +		/* io wasn't active to abort consider it done */
> +		/* assume completion path already completing in parallel */
> +		return;
> +	}
> +}

This looks odd - the point of the blk_mq_tagset_busy_iter loops
is to return the in-core requests to the caller.  The controller
shutdown following this would take care of invalidating anything on
the wire.

I have to admit that I'm still a bit confused how the FC abort
fits into the whole NVMe architecture model.


Thread overview: 8+ messages
2016-10-23 19:31 [PATCH v3 5/7] nvme-fabrics: Add host support for FC transport James Smart
2016-10-25 16:47 ` Christoph Hellwig [this message]
2016-10-27 22:28   ` James Smart
2016-10-28  8:27     ` Christoph Hellwig
2016-10-25 23:47 ` J Freyensee
2016-10-27 22:46   ` James Smart
     [not found] <0EBCD721-6DFC-4095-A4FC-B32F3577A985@cavium.com>
2016-11-03  0:04 ` Trapp, Darren
2016-11-04 15:06   ` James Smart
