linux-nvme.lists.infradead.org archive mirror
* Re: [PATCHv3 3/5] nvme: implement I/O Command Sets Command Set support
       [not found]     ` <20200623112551.GB117742@localhost.localdomain>
@ 2020-06-23 14:59       ` Keith Busch
  2020-06-23 22:10       ` Keith Busch
  2020-06-23 23:20       ` Sagi Grimberg
  2 siblings, 0 replies; 20+ messages in thread
From: Keith Busch @ 2020-06-23 14:59 UTC (permalink / raw)
  To: Niklas Cassel
  Cc: axboe, Sagi Grimberg, Martin K . Petersen, Johannes Thumshirn,
	Daniel Wagner, linux-nvme, linux-block, Javier González,
	hch, Matias Bjorling

On Tue, Jun 23, 2020 at 11:25:53AM +0000, Niklas Cassel wrote:
> On Tue, Jun 23, 2020 at 01:53:47AM -0700, Sagi Grimberg wrote:
> > >   static int nvme_setup_streams_ns(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
> > > @@ -1930,6 +1950,15 @@ static int __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
> > >   	if (ns->lba_shift == 0)
> > >   		ns->lba_shift = 9;
> > > +	switch (ns->head->ids.csi) {
> > > +	case NVME_CSI_NVM:
> > > +		break;
> > > +	default:
> > > +		dev_warn(ctrl->device, "unknown csi:%d ns:%d\n",
> > > +			ns->head->ids.csi, ns->head->ns_id);
> > > +		return -ENODEV;
> > > +	}
> > 
> > Not sure we need a switch-case statement for a single case target...
> 
> I would consider it two cases. A supported CSI or a non-supported CSI
> (which means any CSI value != NVME_CSI_NVM).
> 
> However, a follow up patch (patch 5/5 in this series) adds another case
> to this switch-case statement (NVME_CSI_ZNS).
> 
> I guess this patch could have used an if-else statement, and patch 5/5
> replaced the if-statement with a switch-case.
> However, since a patch in the same series actually adds another case,
> I think that it is clearer this way.
> (A switch-case with only two cases added, in a patch that is not the last
> one in the series, suggests (at least to me) that it will most likely be
> extended in a following patch.)

Yeah, this patch is laying the foundation for future command sets.
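
For reference, patch 5/5 later in this thread extends exactly this switch
with a ZNS arm. A sketch of the resulting shape, using only names taken
from the quoted v3 patches:

--
	switch (ns->head->ids.csi) {
	case NVME_CSI_NVM:
		break;
	case NVME_CSI_ZNS:
		/* added by patch 5/5 for zoned namespaces */
		ret = nvme_update_zone_info(disk, ns, lbaf);
		if (ret)
			return ret;
		break;
	default:
		dev_warn(ctrl->device, "unknown csi:%d ns:%d\n",
			 ns->head->ids.csi, ns->head->ns_id);
		return -ENODEV;
	}
--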


* Re: [PATCHv3 4/5] nvme: support for multi-command set effects
       [not found] ` <20200622162530.1287650-5-kbusch@kernel.org>
@ 2020-06-23 17:43   ` Sagi Grimberg
  0 siblings, 0 replies; 20+ messages in thread
From: Sagi Grimberg @ 2020-06-23 17:43 UTC (permalink / raw)
  To: Keith Busch, linux-nvme, hch, linux-block, axboe
  Cc: Keith Busch, Martin K . Petersen, Daniel Wagner,
	Johannes Thumshirn, Javier González, Matias Bjørling

Looks good,

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


* Re: [PATCHv3 5/5] nvme: support for zoned namespaces
       [not found] ` <20200622162530.1287650-6-kbusch@kernel.org>
@ 2020-06-23 17:45   ` Sagi Grimberg
  2020-06-24  9:11   ` Javier González
  2020-06-29 13:53   ` Johannes Thumshirn
  2 siblings, 0 replies; 20+ messages in thread
From: Sagi Grimberg @ 2020-06-23 17:45 UTC (permalink / raw)
  To: Keith Busch, linux-nvme, hch, linux-block, axboe
  Cc: Niklas Cassel, Damien Le Moal, Ajay Joshi, Keith Busch,
	Martin K . Petersen, Dmitry Fomichev, Aravind Ramesh,
	Hans Holmberg, Matias Bjørling

Looks good Keith,

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


* Re: [PATCHv3 3/5] nvme: implement I/O Command Sets Command Set support
       [not found]     ` <20200623112551.GB117742@localhost.localdomain>
  2020-06-23 14:59       ` [PATCHv3 3/5] nvme: implement I/O Command Sets Command Set support Keith Busch
@ 2020-06-23 22:10       ` Keith Busch
  2020-06-23 23:17         ` Sagi Grimberg
  2020-06-23 23:20       ` Sagi Grimberg
  2 siblings, 1 reply; 20+ messages in thread
From: Keith Busch @ 2020-06-23 22:10 UTC (permalink / raw)
  To: Niklas Cassel
  Cc: axboe, Sagi Grimberg, Martin K . Petersen, Johannes Thumshirn,
	Daniel Wagner, linux-nvme, linux-block, Javier González,
	hch, Matias Bjorling

On Tue, Jun 23, 2020 at 11:25:53AM +0000, Niklas Cassel wrote:
> On Tue, Jun 23, 2020 at 01:53:47AM -0700, Sagi Grimberg wrote:
> > On 6/22/20 9:25 AM, Keith Busch wrote:
> > > -		len = nvme_process_ns_desc(ctrl, ids, cur);
> > > +		len = nvme_process_ns_desc(ctrl, ids, cur, &csi_seen);
> > >   		if (len < 0)
> > >   			goto free_data;
> > >   		len += sizeof(*cur);
> > >   	}
> > >   free_data:
> > > +	if (!status && nvme_multi_css(ctrl) && !csi_seen) {
> > 
> > We will clear the status if we detect a path error, that is to
> > avoid needlessly removing the ns for path failures, so you should
> > check at the goto site.
> 
> The problem is that this check has to be done after processing all the
> ns descs, so it has to be the final thing: no matter if
> nvme_process_ns_desc() returned an error, or if NVME_NIDT_CSI simply
> wasn't part of the ns desc list and the loop reached the end without
> error.
> 
> Even if the nvme command failed and the status was cleared:
> 
>                 if (status > 0 && !(status & NVME_SC_DNR))
>                         status = 0;

This check is so weird. What has DNR got to do with whether or not we
want to continue with this namespace? The commit that adds this says
it's to check for a host failed IO, but a controller can just as validly
set DNR in its error status, in which case we'd still want to clear the
status.


* Re: [PATCHv3 3/5] nvme: implement I/O Command Sets Command Set support
  2020-06-23 22:10       ` Keith Busch
@ 2020-06-23 23:17         ` Sagi Grimberg
  2020-06-24 17:25           ` Keith Busch
  0 siblings, 1 reply; 20+ messages in thread
From: Sagi Grimberg @ 2020-06-23 23:17 UTC (permalink / raw)
  To: Keith Busch, Niklas Cassel
  Cc: axboe, Daniel Wagner, Martin K . Petersen, Johannes Thumshirn,
	linux-nvme, linux-block, Javier González, hch,
	Matias Bjorling


>>>> -		len = nvme_process_ns_desc(ctrl, ids, cur);
>>>> +		len = nvme_process_ns_desc(ctrl, ids, cur, &csi_seen);
>>>>    		if (len < 0)
>>>>    			goto free_data;
>>>>    		len += sizeof(*cur);
>>>>    	}
>>>>    free_data:
>>>> +	if (!status && nvme_multi_css(ctrl) && !csi_seen) {
>>>
>>> We will clear the status if we detect a path error, that is to
>>> avoid needlessly removing the ns for path failures, so you should
>>> check at the goto site.
>>
>> The problem is that this check has to be done after processing all the
>> ns descs, so it has to be the final thing: no matter if
>> nvme_process_ns_desc() returned an error, or if NVME_NIDT_CSI simply
>> wasn't part of the ns desc list and the loop reached the end without
>> error.
>>
>> Even if the nvme command failed and the status was cleared:
>>
>>                  if (status > 0 && !(status & NVME_SC_DNR))
>>                          status = 0;
> 
> This check is so weird. What has DNR got to do with whether or not we
> want to continue with this namespace? The commit that adds this says
> it's to check for a host failed IO, but a controller can just as validly
> set DNR in its error status, in which case we'd still want to clear the
> status.

The reason is that if this error is not a DNR error, a retry is
expected to succeed, so we don't want to remove the namespace.

The problem that this solves is the fact that we have various error
recovery conditions (interconnect issues, failures, resets) and the
async works are designed to continue to run, issue I/O and fail. The
scan work will revalidate namespaces and remove them if it fails to
do so, which is inherently wrong if they are simply inaccessible to
the host at this time.
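
In code terms, the policy being described is the existing check in
nvme_identify_ns_descs() (introduced by commit 59c7c3caaaf87), roughly:

--
	/*
	 * A failure without DNR set is treated as transient (reset,
	 * path loss): clear it and keep the namespace; scan_work will
	 * revalidate it again once the controller is back.
	 */
	if (status > 0 && !(status & NVME_SC_DNR))
		status = 0;
--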


* Re: [PATCHv3 3/5] nvme: implement I/O Command Sets Command Set support
       [not found]     ` <20200623112551.GB117742@localhost.localdomain>
  2020-06-23 14:59       ` [PATCHv3 3/5] nvme: implement I/O Command Sets Command Set support Keith Busch
  2020-06-23 22:10       ` Keith Busch
@ 2020-06-23 23:20       ` Sagi Grimberg
  2 siblings, 0 replies; 20+ messages in thread
From: Sagi Grimberg @ 2020-06-23 23:20 UTC (permalink / raw)
  To: Niklas Cassel
  Cc: axboe, Daniel Wagner, Martin K . Petersen, Johannes Thumshirn,
	linux-nvme, linux-block, Keith Busch, Javier González, hch,
	Matias Bjorling



On 6/23/20 4:25 AM, Niklas Cassel wrote:
> On Tue, Jun 23, 2020 at 01:53:47AM -0700, Sagi Grimberg wrote:
>>
>>
>> On 6/22/20 9:25 AM, Keith Busch wrote:
>>> From: Niklas Cassel <niklas.cassel@wdc.com>
>>>
>>> Implements support for the I/O Command Sets command set. The command set
>>> introduces a method to enumerate multiple command sets per namespace. If
>>> the command set is exposed, this method for enumeration will be used
>>> instead of the traditional method that uses the CC.CSS register for
>>> command set identification.
>>>
>>> For namespaces where the Command Set Identifier is not supported or
>>> recognized, the specific namespace will not be created.
>>>
>>> Reviewed-by: Javier González <javier.gonz@samsung.com>
>>> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
>>> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
>>> Reviewed-by: Matias Bjørling <matias.bjorling@wdc.com>
>>> Reviewed-by: Daniel Wagner <dwagner@suse.de>
>>> Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com>
>>> ---
>>>    drivers/nvme/host/core.c | 48 +++++++++++++++++++++++++++++++++-------
>>>    drivers/nvme/host/nvme.h |  1 +
>>>    include/linux/nvme.h     | 19 ++++++++++++++--
>>>    3 files changed, 58 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>>> index 9491dbcfe81a..45a3cb5a35bd 100644
>>> --- a/drivers/nvme/host/core.c
>>> +++ b/drivers/nvme/host/core.c
>>> @@ -1056,8 +1056,13 @@ static int nvme_identify_ctrl(struct nvme_ctrl *dev, struct nvme_id_ctrl **id)
>>>    	return error;
>>>    }
>>> +static bool nvme_multi_css(struct nvme_ctrl *ctrl)
>>> +{
>>> +	return (ctrl->ctrl_config & NVME_CC_CSS_MASK) == NVME_CC_CSS_CSI;
>>> +}
>>> +
>>>    static int nvme_process_ns_desc(struct nvme_ctrl *ctrl, struct nvme_ns_ids *ids,
>>> -		struct nvme_ns_id_desc *cur)
>>> +		struct nvme_ns_id_desc *cur, bool *csi_seen)
>>>    {
>>>    	const char *warn_str = "ctrl returned bogus length:";
>>>    	void *data = cur;
>>> @@ -1087,6 +1092,15 @@ static int nvme_process_ns_desc(struct nvme_ctrl *ctrl, struct nvme_ns_ids *ids,
>>>    		}
>>>    		uuid_copy(&ids->uuid, data + sizeof(*cur));
>>>    		return NVME_NIDT_UUID_LEN;
>>> +	case NVME_NIDT_CSI:
>>> +		if (cur->nidl != NVME_NIDT_CSI_LEN) {
>>> +			dev_warn(ctrl->device, "%s %d for NVME_NIDT_CSI\n",
>>> +				 warn_str, cur->nidl);
>>> +			return -1;
>>> +		}
>>> +		memcpy(&ids->csi, data + sizeof(*cur), NVME_NIDT_CSI_LEN);
>>> +		*csi_seen = true;
>>> +		return NVME_NIDT_CSI_LEN;
>>>    	default:
>>>    		/* Skip unknown types */
>>>    		return cur->nidl;
>>> @@ -1097,10 +1111,9 @@ static int nvme_identify_ns_descs(struct nvme_ctrl *ctrl, unsigned nsid,
>>>    		struct nvme_ns_ids *ids)
>>>    {
>>>    	struct nvme_command c = { };
>>> -	int status;
>>> +	bool csi_seen = false;
>>> +	int status, pos, len;
>>>    	void *data;
>>> -	int pos;
>>> -	int len;
>>>    	c.identify.opcode = nvme_admin_identify;
>>>    	c.identify.nsid = cpu_to_le32(nsid);
>>> @@ -1130,13 +1143,19 @@ static int nvme_identify_ns_descs(struct nvme_ctrl *ctrl, unsigned nsid,
>>>    		if (cur->nidl == 0)
>>>    			break;
>>> -		len = nvme_process_ns_desc(ctrl, ids, cur);
>>> +		len = nvme_process_ns_desc(ctrl, ids, cur, &csi_seen);
>>>    		if (len < 0)
>>>    			goto free_data;
>>>    		len += sizeof(*cur);
>>>    	}
>>>    free_data:
>>> +	if (!status && nvme_multi_css(ctrl) && !csi_seen) {
>>
>> We will clear the status if we detect a path error, that is to
>> avoid needlessly removing the ns for path failures, so you should
>> check at the goto site.
> 
> The problem is that this check has to be done after processing all the
> ns descs, so it has to be the final thing: no matter if
> nvme_process_ns_desc() returned an error, or if NVME_NIDT_CSI simply
> wasn't part of the ns desc list and the loop reached the end without
> error.
> 
> Even if the nvme command failed and the status was cleared:
> 
>                  if (status > 0 && !(status & NVME_SC_DNR))
>                          status = 0;
> 
> we still need to return an error if (nvme_multi_css(ctrl) && !csi_seen).
> (Not reporting a CSI when nvme_multi_css() is enabled is fatal.)
> 
> That is why the code looks like it does.
> 
> I guess we could do something like this, which does the same thing,
> but perhaps is a bit clearer:
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index e95f0c498a6b..bef687b9a277 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -1160,8 +1160,10 @@ static int nvme_identify_ns_descs(struct nvme_ctrl *ctrl, unsigned nsid,
>                    * Don't treat an error as fatal, as we potentially already
>                    * have a NGUID or EUI-64.
>                    */
> -               if (status > 0 && !(status & NVME_SC_DNR))
> +               if (status > 0 && !(status & NVME_SC_DNR)) {
>                          status = 0;
> +                       goto csi_check;
> +               }

I think it's the opposite. If we failed without DNR, you should assume
that either the controller wants the host to retry, or it's a
path/transport error. So we need to leave this namespace alone and
assume that when the host is connected back to the controller, the
scan_work will revalidate again.

So you should fail the csi_check only if you see a DNR error status.
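
In other words, something like this sketch (not a final patch, just the
shape of the policy, using the csi_seen flag from patch 3/5):

--
	if (status > 0) {
		if (status & NVME_SC_DNR) {
			/* final answer from the controller: csi_check applies */
			if (nvme_multi_css(ctrl) && !csi_seen)
				status = -EINVAL;
		} else {
			/* transient: keep the ns, scan_work will retry */
			status = 0;
		}
	}
--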


* Re: [PATCHv3 5/5] nvme: support for zoned namespaces
       [not found] ` <20200622162530.1287650-6-kbusch@kernel.org>
  2020-06-23 17:45   ` [PATCHv3 5/5] nvme: support for zoned namespaces Sagi Grimberg
@ 2020-06-24  9:11   ` Javier González
  2020-06-29 13:53   ` Johannes Thumshirn
  2 siblings, 0 replies; 20+ messages in thread
From: Javier González @ 2020-06-24  9:11 UTC (permalink / raw)
  To: Keith Busch
  Cc: axboe, Niklas Cassel, Damien Le Moal, Ajay Joshi, sagi,
	Keith Busch, Dmitry Fomichev, Aravind Ramesh, linux-nvme,
	linux-block, Hans Holmberg, Martin K . Petersen, hch,
	Matias Bjørling

On 22.06.2020 09:25, Keith Busch wrote:
>From: Keith Busch <keith.busch@wdc.com>
>
>Add support for NVM Express Zoned Namespaces (ZNS) Command Set defined
>in NVM Express TP4053. Zoned namespaces are discovered based on their
>Command Set Identifier reported in the namespace's Namespace
>Identification Descriptor list. A successfully discovered Zoned
>Namespace will be registered with the block layer as a host managed
>zoned block device with Zone Append command support. A namespace that
>does not support append is not supported by the driver.
>
>Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
>Signed-off-by: Hans Holmberg <hans.holmberg@wdc.com>
>Signed-off-by: Dmitry Fomichev <dmitry.fomichev@wdc.com>
>Signed-off-by: Ajay Joshi <ajay.joshi@wdc.com>
>Signed-off-by: Aravind Ramesh <aravind.ramesh@wdc.com>
>Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com>
>Signed-off-by: Matias Bjørling <matias.bjorling@wdc.com>
>Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
>Signed-off-by: Keith Busch <keith.busch@wdc.com>
>---
> block/Kconfig              |   5 +-
> drivers/nvme/host/Makefile |   1 +
> drivers/nvme/host/core.c   |  91 ++++++++++++--
> drivers/nvme/host/nvme.h   |  39 ++++++
> drivers/nvme/host/zns.c    | 245 +++++++++++++++++++++++++++++++++++++
> include/linux/nvme.h       | 114 ++++++++++++++++-
> 6 files changed, 480 insertions(+), 15 deletions(-)
> create mode 100644 drivers/nvme/host/zns.c
>
>diff --git a/block/Kconfig b/block/Kconfig
>index 9357d7302398..bbad5e8bbffe 100644
>--- a/block/Kconfig
>+++ b/block/Kconfig
>@@ -86,9 +86,10 @@ config BLK_DEV_ZONED
> 	select MQ_IOSCHED_DEADLINE
> 	help
> 	Block layer zoned block device support. This option enables
>-	support for ZAC/ZBC host-managed and host-aware zoned block devices.
>+	support for ZAC/ZBC/ZNS host-managed and host-aware zoned block
>+	devices.
>
>-	Say yes here if you have a ZAC or ZBC storage device.
>+	Say yes here if you have a ZAC, ZBC, or ZNS storage device.
>
> config BLK_DEV_THROTTLING
> 	bool "Block layer bio throttling support"
>diff --git a/drivers/nvme/host/Makefile b/drivers/nvme/host/Makefile
>index fc7b26be692d..d7f6a87687b8 100644
>--- a/drivers/nvme/host/Makefile
>+++ b/drivers/nvme/host/Makefile
>@@ -13,6 +13,7 @@ nvme-core-y				:= core.o
> nvme-core-$(CONFIG_TRACING)		+= trace.o
> nvme-core-$(CONFIG_NVME_MULTIPATH)	+= multipath.o
> nvme-core-$(CONFIG_NVM)			+= lightnvm.o
>+nvme-core-$(CONFIG_BLK_DEV_ZONED)	+= zns.o
> nvme-core-$(CONFIG_FAULT_INJECTION_DEBUG_FS)	+= fault_inject.o
> nvme-core-$(CONFIG_NVME_HWMON)		+= hwmon.o
>
>diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>index 64b7b9fc2817..885db561de17 100644
>--- a/drivers/nvme/host/core.c
>+++ b/drivers/nvme/host/core.c
>@@ -89,7 +89,7 @@ static dev_t nvme_chr_devt;
> static struct class *nvme_class;
> static struct class *nvme_subsys_class;
>
>-static int nvme_revalidate_disk(struct gendisk *disk);
>+static int _nvme_revalidate_disk(struct gendisk *disk);
> static void nvme_put_subsystem(struct nvme_subsystem *subsys);
> static void nvme_remove_invalid_namespaces(struct nvme_ctrl *ctrl,
> 					   unsigned nsid);
>@@ -287,6 +287,10 @@ void nvme_complete_rq(struct request *req)
> 			nvme_retry_req(req);
> 			return;
> 		}
>+	} else if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) &&
>+		   req_op(req) == REQ_OP_ZONE_APPEND) {
>+		req->__sector = nvme_lba_to_sect(req->q->queuedata,
>+			le64_to_cpu(nvme_req(req)->result.u64));
> 	}
>
> 	nvme_trace_bio_complete(req, status);
>@@ -673,7 +677,8 @@ static inline blk_status_t nvme_setup_write_zeroes(struct nvme_ns *ns,
> }
>
> static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
>-		struct request *req, struct nvme_command *cmnd)
>+		struct request *req, struct nvme_command *cmnd,
>+		enum nvme_opcode op)
> {
> 	struct nvme_ctrl *ctrl = ns->ctrl;
> 	u16 control = 0;
>@@ -687,7 +692,7 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
> 	if (req->cmd_flags & REQ_RAHEAD)
> 		dsmgmt |= NVME_RW_DSM_FREQ_PREFETCH;
>
>-	cmnd->rw.opcode = (rq_data_dir(req) ? nvme_cmd_write : nvme_cmd_read);
>+	cmnd->rw.opcode = op;
> 	cmnd->rw.nsid = cpu_to_le32(ns->head->ns_id);
> 	cmnd->rw.slba = cpu_to_le64(nvme_sect_to_lba(ns, blk_rq_pos(req)));
> 	cmnd->rw.length = cpu_to_le16((blk_rq_bytes(req) >> ns->lba_shift) - 1);
>@@ -716,6 +721,8 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
> 		case NVME_NS_DPS_PI_TYPE2:
> 			control |= NVME_RW_PRINFO_PRCHK_GUARD |
> 					NVME_RW_PRINFO_PRCHK_REF;
>+			if (op == nvme_cmd_zone_append)
>+				control |= NVME_RW_APPEND_PIREMAP;
> 			cmnd->rw.reftag = cpu_to_le32(t10_pi_ref_tag(req));
> 			break;
> 		}
>@@ -756,6 +763,19 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req,
> 	case REQ_OP_FLUSH:
> 		nvme_setup_flush(ns, cmd);
> 		break;
>+	case REQ_OP_ZONE_RESET_ALL:
>+	case REQ_OP_ZONE_RESET:
>+		ret = nvme_setup_zone_mgmt_send(ns, req, cmd, NVME_ZONE_RESET);
>+		break;
>+	case REQ_OP_ZONE_OPEN:
>+		ret = nvme_setup_zone_mgmt_send(ns, req, cmd, NVME_ZONE_OPEN);
>+		break;
>+	case REQ_OP_ZONE_CLOSE:
>+		ret = nvme_setup_zone_mgmt_send(ns, req, cmd, NVME_ZONE_CLOSE);
>+		break;
>+	case REQ_OP_ZONE_FINISH:
>+		ret = nvme_setup_zone_mgmt_send(ns, req, cmd, NVME_ZONE_FINISH);
>+		break;
> 	case REQ_OP_WRITE_ZEROES:
> 		ret = nvme_setup_write_zeroes(ns, req, cmd);
> 		break;
>@@ -763,8 +783,13 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req,
> 		ret = nvme_setup_discard(ns, req, cmd);
> 		break;
> 	case REQ_OP_READ:
>+		ret = nvme_setup_rw(ns, req, cmd, nvme_cmd_read);
>+		break;
> 	case REQ_OP_WRITE:
>-		ret = nvme_setup_rw(ns, req, cmd);
>+		ret = nvme_setup_rw(ns, req, cmd, nvme_cmd_write);
>+		break;
>+	case REQ_OP_ZONE_APPEND:
>+		ret = nvme_setup_rw(ns, req, cmd, nvme_cmd_zone_append);
> 		break;
> 	default:
> 		WARN_ON_ONCE(1);
>@@ -1391,14 +1416,23 @@ static u32 nvme_passthru_start(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
> 	return effects;
> }
>
>-static void nvme_update_formats(struct nvme_ctrl *ctrl)
>+static void nvme_update_formats(struct nvme_ctrl *ctrl, u32 *effects)
> {
> 	struct nvme_ns *ns;
>
> 	down_read(&ctrl->namespaces_rwsem);
> 	list_for_each_entry(ns, &ctrl->namespaces, list)
>-		if (ns->disk && nvme_revalidate_disk(ns->disk))
>+		if (ns->disk && _nvme_revalidate_disk(ns->disk))
> 			nvme_set_queue_dying(ns);
>+		else if (blk_queue_is_zoned(ns->disk->queue)) {
>+			/*
>+			 * IO commands are required to fully revalidate a zoned
>+			 * device. Force the command effects to trigger rescan
>+			 * work so report zones can run in a context with
>+			 * unfrozen IO queues.
>+			 */
>+			*effects |= NVME_CMD_EFFECTS_NCC;
>+		}
> 	up_read(&ctrl->namespaces_rwsem);
> }
>
>@@ -1410,7 +1444,7 @@ static void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects)
> 	 * this command.
> 	 */
> 	if (effects & NVME_CMD_EFFECTS_LBCC)
>-		nvme_update_formats(ctrl);
>+		nvme_update_formats(ctrl, &effects);
> 	if (effects & (NVME_CMD_EFFECTS_LBCC | NVME_CMD_EFFECTS_CSE_MASK)) {
> 		nvme_unfreeze(ctrl);
> 		nvme_mpath_unfreeze(ctrl->subsys);
>@@ -1525,7 +1559,7 @@ static int nvme_user_cmd64(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
>  * Issue ioctl requests on the first available path.  Note that unlike normal
>  * block layer requests we will not retry failed request on another controller.
>  */
>-static struct nvme_ns *nvme_get_ns_from_disk(struct gendisk *disk,
>+struct nvme_ns *nvme_get_ns_from_disk(struct gendisk *disk,
> 		struct nvme_ns_head **head, int *srcu_idx)
> {
> #ifdef CONFIG_NVME_MULTIPATH
>@@ -1545,7 +1579,7 @@ static struct nvme_ns *nvme_get_ns_from_disk(struct gendisk *disk,
> 	return disk->private_data;
> }
>
>-static void nvme_put_ns_from_disk(struct nvme_ns_head *head, int idx)
>+void nvme_put_ns_from_disk(struct nvme_ns_head *head, int idx)
> {
> 	if (head)
> 		srcu_read_unlock(&head->srcu, idx);
>@@ -1938,21 +1972,28 @@ static void nvme_update_disk_info(struct gendisk *disk,
>
> static int __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
> {
>+	unsigned lbaf = id->flbas & NVME_NS_FLBAS_LBA_MASK;
> 	struct nvme_ns *ns = disk->private_data;
> 	struct nvme_ctrl *ctrl = ns->ctrl;
>+	int ret;
> 	u32 iob;
>
> 	/*
> 	 * If identify namespace failed, use default 512 byte block size so
> 	 * block layer can use before failing read/write for 0 capacity.
> 	 */
>-	ns->lba_shift = id->lbaf[id->flbas & NVME_NS_FLBAS_LBA_MASK].ds;
>+	ns->lba_shift = id->lbaf[lbaf].ds;
> 	if (ns->lba_shift == 0)
> 		ns->lba_shift = 9;
>
> 	switch (ns->head->ids.csi) {
> 	case NVME_CSI_NVM:
> 		break;
>+	case NVME_CSI_ZNS:
>+		ret = nvme_update_zone_info(disk, ns, lbaf);
>+		if (ret)
>+			return ret;
>+		break;
> 	default:
> 		dev_warn(ctrl->device, "unknown csi:%d ns:%d\n",
> 			ns->head->ids.csi, ns->head->ns_id);
>@@ -1966,7 +2007,7 @@ static int __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
> 		iob = nvme_lba_to_sect(ns, le16_to_cpu(id->noiob));
>
> 	ns->features = 0;
>-	ns->ms = le16_to_cpu(id->lbaf[id->flbas & NVME_NS_FLBAS_LBA_MASK].ms);
>+	ns->ms = le16_to_cpu(id->lbaf[lbaf].ms);
> 	/* the PI implementation requires metadata equal t10 pi tuple size */
> 	if (ns->ms == sizeof(struct t10_pi_tuple))
> 		ns->pi_type = id->dps & NVME_NS_DPS_PI_MASK;
>@@ -2009,7 +2050,7 @@ static int __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
> 	return 0;
> }
>
>-static int nvme_revalidate_disk(struct gendisk *disk)
>+static int _nvme_revalidate_disk(struct gendisk *disk)
> {
> 	struct nvme_ns *ns = disk->private_data;
> 	struct nvme_ctrl *ctrl = ns->ctrl;
>@@ -2057,6 +2098,28 @@ static int nvme_revalidate_disk(struct gendisk *disk)
> 	return ret;
> }
>
>+static int nvme_revalidate_disk(struct gendisk *disk)
>+{
>+	int ret;
>+
>+	ret = _nvme_revalidate_disk(disk);
>+	if (ret)
>+		return ret;
>+
>+#ifdef CONFIG_BLK_DEV_ZONED
>+	if (blk_queue_is_zoned(disk->queue)) {
>+		struct nvme_ns *ns = disk->private_data;
>+		struct nvme_ctrl *ctrl = ns->ctrl;
>+
>+		ret = blk_revalidate_disk_zones(disk, NULL);
>+		if (!ret)
>+			blk_queue_max_zone_append_sectors(disk->queue,
>+							  ctrl->max_zone_append);
>+	}
>+#endif
>+	return ret;
>+}
>+
> static char nvme_pr_type(enum pr_type type)
> {
> 	switch (type) {
>@@ -2187,6 +2250,7 @@ static const struct block_device_operations nvme_fops = {
> 	.release	= nvme_release,
> 	.getgeo		= nvme_getgeo,
> 	.revalidate_disk= nvme_revalidate_disk,
>+	.report_zones	= nvme_report_zones,
> 	.pr_ops		= &nvme_pr_ops,
> };
>
>@@ -2212,6 +2276,7 @@ const struct block_device_operations nvme_ns_head_ops = {
> 	.ioctl		= nvme_ioctl,
> 	.compat_ioctl	= nvme_compat_ioctl,
> 	.getgeo		= nvme_getgeo,
>+	.report_zones	= nvme_report_zones,
> 	.pr_ops		= &nvme_pr_ops,
> };
> #endif /* CONFIG_NVME_MULTIPATH */
>@@ -4439,6 +4504,8 @@ static inline void _nvme_check_size(void)
> 	BUILD_BUG_ON(sizeof(struct nvme_command) != 64);
> 	BUILD_BUG_ON(sizeof(struct nvme_id_ctrl) != NVME_IDENTIFY_DATA_SIZE);
> 	BUILD_BUG_ON(sizeof(struct nvme_id_ns) != NVME_IDENTIFY_DATA_SIZE);
>+	BUILD_BUG_ON(sizeof(struct nvme_id_ns_zns) != NVME_IDENTIFY_DATA_SIZE);
>+	BUILD_BUG_ON(sizeof(struct nvme_id_ctrl_zns) != NVME_IDENTIFY_DATA_SIZE);
> 	BUILD_BUG_ON(sizeof(struct nvme_lba_range_type) != 64);
> 	BUILD_BUG_ON(sizeof(struct nvme_smart_log) != 512);
> 	BUILD_BUG_ON(sizeof(struct nvme_dbbuf) != 64);
>diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
>index 4a1982133e9a..ecf443efdf91 100644
>--- a/drivers/nvme/host/nvme.h
>+++ b/drivers/nvme/host/nvme.h
>@@ -238,6 +238,9 @@ struct nvme_ctrl {
> 	u32 max_hw_sectors;
> 	u32 max_segments;
> 	u32 max_integrity_segments;
>+#ifdef CONFIG_BLK_DEV_ZONED
>+	u32 max_zone_append;
>+#endif
> 	u16 crdt[3];
> 	u16 oncs;
> 	u16 oacs;
>@@ -402,6 +405,9 @@ struct nvme_ns {
> 	u16 sgs;
> 	u32 sws;
> 	u8 pi_type;
>+#ifdef CONFIG_BLK_DEV_ZONED
>+	u64 zsze;
>+#endif
> 	unsigned long features;
> 	unsigned long flags;
> #define NVME_NS_REMOVING	0
>@@ -567,6 +573,9 @@ int nvme_delete_ctrl(struct nvme_ctrl *ctrl);
>
> int nvme_get_log(struct nvme_ctrl *ctrl, u32 nsid, u8 log_page, u8 lsp, u8 csi,
> 		void *log, size_t size, u64 offset);
>+struct nvme_ns *nvme_get_ns_from_disk(struct gendisk *disk,
>+		struct nvme_ns_head **head, int *srcu_idx);
>+void nvme_put_ns_from_disk(struct nvme_ns_head *head, int idx);
>
> extern const struct attribute_group *nvme_ns_id_attr_groups[];
> extern const struct block_device_operations nvme_ns_head_ops;
>@@ -688,6 +697,36 @@ static inline void nvme_mpath_start_freeze(struct nvme_subsystem *subsys)
> }
> #endif /* CONFIG_NVME_MULTIPATH */
>
>+#ifdef CONFIG_BLK_DEV_ZONED
>+int nvme_update_zone_info(struct gendisk *disk, struct nvme_ns *ns,
>+			  unsigned lbaf);
>+
>+int nvme_report_zones(struct gendisk *disk, sector_t sector,
>+		      unsigned int nr_zones, report_zones_cb cb, void *data);
>+
>+blk_status_t nvme_setup_zone_mgmt_send(struct nvme_ns *ns, struct request *req,
>+				       struct nvme_command *cmnd,
>+				       enum nvme_zone_mgmt_action action);
>+#else
>+#define nvme_report_zones NULL
>+
>+static inline blk_status_t nvme_setup_zone_mgmt_send(struct nvme_ns *ns,
>+		struct request *req, struct nvme_command *cmnd,
>+		enum nvme_zone_mgmt_action action)
>+{
>+	return BLK_STS_NOTSUPP;
>+}
>+
>+static inline int nvme_update_zone_info(struct gendisk *disk,
>+					struct nvme_ns *ns,
>+					unsigned lbaf)
>+{
>+	dev_warn(ns->ctrl->device,
>+		 "Please enable CONFIG_BLK_DEV_ZONED to support ZNS devices\n");
>+	return -EPROTONOSUPPORT;
>+}
>+#endif
>+
> #ifdef CONFIG_NVM
> int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node);
> void nvme_nvm_unregister(struct nvme_ns *ns);
>diff --git a/drivers/nvme/host/zns.c b/drivers/nvme/host/zns.c
>new file mode 100644
>index 000000000000..ee6f49a8aee4
>--- /dev/null
>+++ b/drivers/nvme/host/zns.c
>@@ -0,0 +1,245 @@
>+// SPDX-License-Identifier: GPL-2.0
>+/*
>+ * Copyright (C) 2020 Western Digital Corporation or its affiliates.
>+ */
>+
>+#include <linux/blkdev.h>
>+#include <linux/vmalloc.h>
>+#include "nvme.h"
>+
>+static int nvme_set_max_append(struct nvme_ctrl *ctrl)
>+{
>+	struct nvme_command c = { };
>+	struct nvme_id_ctrl_zns *id;
>+	int status;
>+
>+	id = kzalloc(sizeof(*id), GFP_KERNEL);
>+	if (!id)
>+		return -ENOMEM;
>+
>+	c.identify.opcode = nvme_admin_identify;
>+	c.identify.cns = NVME_ID_CNS_CS_CTRL;
>+	c.identify.csi = NVME_CSI_ZNS;
>+
>+	status = nvme_submit_sync_cmd(ctrl->admin_q, &c, id, sizeof(*id));
>+	if (status) {
>+		kfree(id);
>+		return status;
>+	}
>+
>+	ctrl->max_zone_append = 1 << (id->zamds + 3);
>+	kfree(id);
>+	return 0;
>+}
>+
>+int nvme_update_zone_info(struct gendisk *disk, struct nvme_ns *ns,
>+			  unsigned lbaf)
>+{
>+	struct nvme_effects_log *log = ns->head->effects;
>+	struct request_queue *q = disk->queue;
>+	struct nvme_command c = { };
>+	struct nvme_id_ns_zns *id;
>+	int status;
>+
>+	/* Driver requires zone append support */
>+	if (!(log->iocs[nvme_cmd_zone_append] & NVME_CMD_EFFECTS_CSUPP)) {
>+		dev_warn(ns->ctrl->device,
>+			"append not supported for zoned namespace:%d\n",
>+			ns->head->ns_id);
>+		return -ENODEV;
>+	}
>+
>+	/* Lazily query controller append limit for the first zoned namespace */
>+	if (!ns->ctrl->max_zone_append) {
>+		status = nvme_set_max_append(ns->ctrl);
>+		if (status)
>+			return status;
>+	}
>+
>+	id = kzalloc(sizeof(*id), GFP_KERNEL);
>+	if (!id)
>+		return -ENOMEM;
>+
>+	c.identify.opcode = nvme_admin_identify;
>+	c.identify.nsid = cpu_to_le32(ns->head->ns_id);
>+	c.identify.cns = NVME_ID_CNS_CS_NS;
>+	c.identify.csi = NVME_CSI_ZNS;
>+
>+	status = nvme_submit_sync_cmd(ns->ctrl->admin_q, &c, id, sizeof(*id));
>+	if (status)
>+		goto free_data;
>+
>+	/*
>+	 * We currently do not handle devices requiring any of the zoned
>+	 * operation characteristics.
>+	 */
>+	if (id->zoc) {
>+		dev_warn(ns->ctrl->device,
>+			"zone operations:%x not supported for namespace:%d\n",
>+			le16_to_cpu(id->zoc), ns->head->ns_id);
>+		status = -EINVAL;
>+		goto free_data;
>+	}
>+
>+	ns->zsze = nvme_lba_to_sect(ns, le64_to_cpu(id->lbafe[lbaf].zsze));
>+	if (!ns->zsze) {
>+		status = -EINVAL;
>+		goto free_data;
>+	}
>+
>+	q->limits.zoned = BLK_ZONED_HM;
>+	blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
>+free_data:
>+	kfree(id);
>+	return status;
>+}
>+
>+static void *nvme_zns_alloc_report_buffer(struct nvme_ns *ns,
>+					  unsigned int nr_zones, size_t *buflen)
>+{
>+	struct request_queue *q = ns->disk->queue;
>+	size_t bufsize;
>+	void *buf;
>+
>+	const size_t min_bufsize = sizeof(struct nvme_zone_report) +
>+				   sizeof(struct nvme_zone_descriptor);
>+
>+	nr_zones = min_t(unsigned int, nr_zones,
>+			 get_capacity(ns->disk) >> ilog2(ns->zsze));
>+
>+	bufsize = sizeof(struct nvme_zone_report) +
>+		nr_zones * sizeof(struct nvme_zone_descriptor);
>+	bufsize = min_t(size_t, bufsize,
>+			queue_max_hw_sectors(q) << SECTOR_SHIFT);
>+	bufsize = min_t(size_t, bufsize, queue_max_segments(q) << PAGE_SHIFT);
>+
>+	while (bufsize >= min_bufsize) {
>+		buf = __vmalloc(bufsize,
>+				GFP_KERNEL | __GFP_ZERO | __GFP_NORETRY);
>+		if (buf) {
>+			*buflen = bufsize;
>+			return buf;
>+		}
>+		bufsize >>= 1;
>+	}
>+	return NULL;
>+}
>+
>+static int __nvme_ns_report_zones(struct nvme_ns *ns, sector_t sector,
>+				  struct nvme_zone_report *report,
>+				  size_t buflen)
>+{
>+	struct nvme_command c = { };
>+	int ret;
>+
>+	c.zmr.opcode = nvme_cmd_zone_mgmt_recv;
>+	c.zmr.nsid = cpu_to_le32(ns->head->ns_id);
>+	c.zmr.slba = cpu_to_le64(nvme_sect_to_lba(ns, sector));
>+	c.zmr.numd = cpu_to_le32(nvme_bytes_to_numd(buflen));
>+	c.zmr.zra = NVME_ZRA_ZONE_REPORT;
>+	c.zmr.zrasf = NVME_ZRASF_ZONE_REPORT_ALL;
>+	c.zmr.pr = NVME_REPORT_ZONE_PARTIAL;
>+
>+	ret = nvme_submit_sync_cmd(ns->queue, &c, report, buflen);
>+	if (ret)
>+		return ret;
>+
>+	return le64_to_cpu(report->nr_zones);
>+}
>+
>+static int nvme_zone_parse_entry(struct nvme_ns *ns,
>+				 struct nvme_zone_descriptor *entry,
>+				 unsigned int idx, report_zones_cb cb,
>+				 void *data)
>+{
>+	struct blk_zone zone = { };
>+
>+	if ((entry->zt & 0xf) != NVME_ZONE_TYPE_SEQWRITE_REQ) {
>+		dev_err(ns->ctrl->device, "invalid zone type %#x\n",
>+				entry->zt);
>+		return -EINVAL;
>+	}
>+
>+	zone.type = BLK_ZONE_TYPE_SEQWRITE_REQ;
>+	zone.cond = entry->zs >> 4;
>+	zone.len = ns->zsze;
>+	zone.capacity = nvme_lba_to_sect(ns, le64_to_cpu(entry->zcap));
>+	zone.start = nvme_lba_to_sect(ns, le64_to_cpu(entry->zslba));
>+	zone.wp = nvme_lba_to_sect(ns, le64_to_cpu(entry->wp));
>+
>+	return cb(&zone, idx, data);
>+}
>+
>+static int nvme_ns_report_zones(struct nvme_ns *ns, sector_t sector,
>+			unsigned int nr_zones, report_zones_cb cb, void *data)
>+{
>+	struct nvme_zone_report *report;
>+	int ret, zone_idx = 0;
>+	unsigned int nz, i;
>+	size_t buflen;
>+
>+	report = nvme_zns_alloc_report_buffer(ns, nr_zones, &buflen);
>+	if (!report)
>+		return -ENOMEM;
>+
>+	sector &= ~(ns->zsze - 1);
>+	while (zone_idx < nr_zones && sector < get_capacity(ns->disk)) {
>+		memset(report, 0, buflen);
>+		ret = __nvme_ns_report_zones(ns, sector, report, buflen);
>+		if (ret < 0)
>+			goto out_free;
>+
>+		nz = min_t(unsigned int, ret, nr_zones);
>+		if (!nz)
>+			break;
>+
>+		for (i = 0; i < nz && zone_idx < nr_zones; i++) {
>+			ret = nvme_zone_parse_entry(ns, &report->entries[i],
>+						    zone_idx, cb, data);
>+			if (ret)
>+				goto out_free;
>+			zone_idx++;
>+		}
>+
>+		sector += ns->zsze * nz;
>+	}
>+
>+	ret = zone_idx;
>+out_free:
>+	kvfree(report);
>+	return ret;
>+}
>+
>+int nvme_report_zones(struct gendisk *disk, sector_t sector,
>+		      unsigned int nr_zones, report_zones_cb cb, void *data)
>+{
>+	struct nvme_ns_head *head = NULL;
>+	struct nvme_ns *ns;
>+	int srcu_idx, ret;
>+
>+	ns = nvme_get_ns_from_disk(disk, &head, &srcu_idx);
>+	if (unlikely(!ns))
>+		return -EWOULDBLOCK;
>+
>+	if (ns->head->ids.csi == NVME_CSI_ZNS)
>+		ret = nvme_ns_report_zones(ns, sector, nr_zones, cb, data);
>+	else
>+		ret = -EINVAL;
>+	nvme_put_ns_from_disk(head, srcu_idx);
>+
>+	return ret;
>+}
>+
>+blk_status_t nvme_setup_zone_mgmt_send(struct nvme_ns *ns, struct request *req,
>+		struct nvme_command *c, enum nvme_zone_mgmt_action action)
>+{
>+	c->zms.opcode = nvme_cmd_zone_mgmt_send;
>+	c->zms.nsid = cpu_to_le32(ns->head->ns_id);
>+	c->zms.slba = cpu_to_le64(nvme_sect_to_lba(ns, blk_rq_pos(req)));
>+	c->zms.zsa = action;
>+
>+	if (req_op(req) == REQ_OP_ZONE_RESET_ALL)
>+		c->zms.select_all = 1;
>+
>+	return BLK_STS_OK;
>+}
>diff --git a/include/linux/nvme.h b/include/linux/nvme.h
>index 95cd03e240a1..d862e5d70818 100644
>--- a/include/linux/nvme.h
>+++ b/include/linux/nvme.h
>@@ -1,6 +1,7 @@
> /* SPDX-License-Identifier: GPL-2.0 */
> /*
>  * Definitions for the NVM Express interface
>  * Copyright (c) 2011-2014, Intel Corporation.
>  */
>
>@@ -374,6 +375,30 @@ struct nvme_id_ns {
> 	__u8			vs[3712];
> };
>
>+struct nvme_zns_lbafe {
>+	__le64			zsze;
>+	__u8			zdes;
>+	__u8			rsvd9[7];
>+};
>+
>+struct nvme_id_ns_zns {
>+	__le16			zoc;
>+	__le16			ozcs;
>+	__le32			mar;
>+	__le32			mor;
>+	__le32			rrl;
>+	__le32			frl;
>+	__u8			rsvd20[2796];
>+	struct nvme_zns_lbafe	lbafe[16];
>+	__u8			rsvd3072[768];
>+	__u8			vs[256];
>+};
>+
>+struct nvme_id_ctrl_zns {
>+	__u8	zamds;
>+	__u8	rsvd1[4095];
>+};
>+
> enum {
> 	NVME_ID_CNS_NS			= 0x00,
> 	NVME_ID_CNS_CTRL		= 0x01,
>@@ -392,6 +417,7 @@ enum {
>
> enum {
> 	NVME_CSI_NVM			= 0,
>+	NVME_CSI_ZNS			= 2,
> };
>
> enum {
>@@ -532,6 +558,27 @@ struct nvme_ana_rsp_hdr {
> 	__le16	rsvd10[3];
> };
>
>+struct nvme_zone_descriptor {
>+	__u8		zt;
>+	__u8		zs;
>+	__u8		za;
>+	__u8		rsvd3[5];
>+	__le64		zcap;
>+	__le64		zslba;
>+	__le64		wp;
>+	__u8		rsvd32[32];
>+};
>+
>+enum {
>+	NVME_ZONE_TYPE_SEQWRITE_REQ	= 0x2,
>+};
>+
>+struct nvme_zone_report {
>+	__le64		nr_zones;
>+	__u8		resv8[56];
>+	struct nvme_zone_descriptor entries[];
>+};
>+
> enum {
> 	NVME_SMART_CRIT_SPARE		= 1 << 0,
> 	NVME_SMART_CRIT_TEMPERATURE	= 1 << 1,
>@@ -626,6 +673,9 @@ enum nvme_opcode {
> 	nvme_cmd_resv_report	= 0x0e,
> 	nvme_cmd_resv_acquire	= 0x11,
> 	nvme_cmd_resv_release	= 0x15,
>+	nvme_cmd_zone_mgmt_send	= 0x79,
>+	nvme_cmd_zone_mgmt_recv	= 0x7a,
>+	nvme_cmd_zone_append	= 0x7d,
> };
>
> #define nvme_opcode_name(opcode)	{ opcode, #opcode }
>@@ -764,6 +814,7 @@ struct nvme_rw_command {
> enum {
> 	NVME_RW_LR			= 1 << 15,
> 	NVME_RW_FUA			= 1 << 14,
>+	NVME_RW_APPEND_PIREMAP		= 1 << 9,
> 	NVME_RW_DSM_FREQ_UNSPEC		= 0,
> 	NVME_RW_DSM_FREQ_TYPICAL	= 1,
> 	NVME_RW_DSM_FREQ_RARE		= 2,
>@@ -829,6 +880,53 @@ struct nvme_write_zeroes_cmd {
> 	__le16			appmask;
> };
>
>+enum nvme_zone_mgmt_action {
>+	NVME_ZONE_CLOSE		= 0x1,
>+	NVME_ZONE_FINISH	= 0x2,
>+	NVME_ZONE_OPEN		= 0x3,
>+	NVME_ZONE_RESET		= 0x4,
>+	NVME_ZONE_OFFLINE	= 0x5,
>+	NVME_ZONE_SET_DESC_EXT	= 0x10,
>+};
>+
>+struct nvme_zone_mgmt_send_cmd {
>+	__u8			opcode;
>+	__u8			flags;
>+	__u16			command_id;
>+	__le32			nsid;
>+	__le32			cdw2[2];
>+	__le64			metadata;
>+	union nvme_data_ptr	dptr;
>+	__le64			slba;
>+	__le32			cdw12;
>+	__u8			zsa;
>+	__u8			select_all;
>+	__u8			rsvd13[2];
>+	__le32			cdw14[2];
>+};
>+
>+struct nvme_zone_mgmt_recv_cmd {
>+	__u8			opcode;
>+	__u8			flags;
>+	__u16			command_id;
>+	__le32			nsid;
>+	__le64			rsvd2[2];
>+	union nvme_data_ptr	dptr;
>+	__le64			slba;
>+	__le32			numd;
>+	__u8			zra;
>+	__u8			zrasf;
>+	__u8			pr;
>+	__u8			rsvd13;
>+	__le32			cdw14[2];
>+};
>+
>+enum {
>+	NVME_ZRA_ZONE_REPORT		= 0,
>+	NVME_ZRASF_ZONE_REPORT_ALL	= 0,
>+	NVME_REPORT_ZONE_PARTIAL	= 1,
>+};
>+
> /* Features */
>
> enum {
>@@ -1300,6 +1398,8 @@ struct nvme_command {
> 		struct nvme_format_cmd format;
> 		struct nvme_dsm_cmd dsm;
> 		struct nvme_write_zeroes_cmd write_zeroes;
>+		struct nvme_zone_mgmt_send_cmd zms;
>+		struct nvme_zone_mgmt_recv_cmd zmr;
> 		struct nvme_abort_cmd abort;
> 		struct nvme_get_log_page_command get_log_page;
> 		struct nvmf_common_command fabrics;
>@@ -1433,6 +1533,18 @@ enum {
> 	NVME_SC_DISCOVERY_RESTART	= 0x190,
> 	NVME_SC_AUTH_REQUIRED		= 0x191,
>
>+	/*
>+	 * I/O Command Set Specific - Zoned commands:
>+	 */
>+	NVME_SC_ZONE_BOUNDARY_ERROR	= 0x1b8,
>+	NVME_SC_ZONE_FULL		= 0x1b9,
>+	NVME_SC_ZONE_READ_ONLY		= 0x1ba,
>+	NVME_SC_ZONE_OFFLINE		= 0x1bb,
>+	NVME_SC_ZONE_INVALID_WRITE	= 0x1bc,
>+	NVME_SC_ZONE_TOO_MANY_ACTIVE	= 0x1bd,
>+	NVME_SC_ZONE_TOO_MANY_OPEN	= 0x1be,
>+	NVME_SC_ZONE_INVALID_TRANSITION	= 0x1bf,
>+
> 	/*
> 	 * Media and Data Integrity Errors:
> 	 */
>-- 
>2.24.1

Looks good!

Reviewed-by: Javier González <javier.gonz@samsung.com>


* Re: [PATCHv3 3/5] nvme: implement I/O Command Sets Command Set support
  2020-06-23 23:17         ` Sagi Grimberg
@ 2020-06-24 17:25           ` Keith Busch
  2020-06-24 17:46             ` Sagi Grimberg
  0 siblings, 1 reply; 20+ messages in thread
From: Keith Busch @ 2020-06-24 17:25 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: axboe, Niklas Cassel, Daniel Wagner, Martin K . Petersen,
	Johannes Thumshirn, linux-nvme, linux-block,
	Javier González, hch, Matias Bjorling

On Tue, Jun 23, 2020 at 04:17:30PM -0700, Sagi Grimberg wrote:
> 
> > > > > -		len = nvme_process_ns_desc(ctrl, ids, cur);
> > > > > +		len = nvme_process_ns_desc(ctrl, ids, cur, &csi_seen);
> > > > >    		if (len < 0)
> > > > >    			goto free_data;
> > > > >    		len += sizeof(*cur);
> > > > >    	}
> > > > >    free_data:
> > > > > +	if (!status && nvme_multi_css(ctrl) && !csi_seen) {
> > > > 
> > > > We will clear the status if we detect a path error, that is to
> > > > avoid needlessly removing the ns for path failures, so you should
> > > > check at the goto site.
> > > 
> > > The problem is that this check has to be done after processing all the
> > > ns descs, so it has to be the final thing: no matter if
> > > nvme_process_ns_desc() returned an error, or if NVME_NIDT_CSI simply
> > > wasn't part of the ns desc list and the loop reached the end without
> > > error.
> > > 
> > > Even if the nvme command failed and the status was cleared:
> > > 
> > >                  if (status > 0 && !(status & NVME_SC_DNR))
> > >                          status = 0;
> > 
> > This check is so weird. What has DNR got to do with whether or not we
> > want to continue with this namespace? The commit that adds this says
> > it's to check for a host failed IO, but a controller can just as validly
> > set DNR in its error status, in which case we'd still want to clear the
> > status.
> 
> The reason is that if this error is not a DNR error, a retry is
> expected to succeed, so we don't want to remove the namespace.

And what if it is a DNR error? For example, the controller simply
doesn't support this CNS value. While the controller should support it,
we definitely don't need it for NVM command set namespaces.
 
> The problem that this solves is the fact that we have various error
> recovery conditions (interconnect issues, failures, resets) and the
> async works are designed to continue to run, issue I/O and fail. The
> scan work will revalidate namespaces and remove them if it fails to
> do so, which is inherently wrong if they are simply inaccessible to
> the host at this time.

Relying on a specific bit in the status code seems a bit indirect for
this kind of condition.


* Re: [PATCHv3 3/5] nvme: implement I/O Command Sets Command Set support
  2020-06-24 17:25           ` Keith Busch
@ 2020-06-24 17:46             ` Sagi Grimberg
  2020-06-24 18:03               ` Keith Busch
  0 siblings, 1 reply; 20+ messages in thread
From: Sagi Grimberg @ 2020-06-24 17:46 UTC (permalink / raw)
  To: Keith Busch
  Cc: axboe, Niklas Cassel, Daniel Wagner, Martin K . Petersen,
	Johannes Thumshirn, linux-nvme, linux-block,
	Javier González, hch, Matias Bjorling



On 6/24/20 10:25 AM, Keith Busch wrote:
> On Tue, Jun 23, 2020 at 04:17:30PM -0700, Sagi Grimberg wrote:
>>
>>>>>> -		len = nvme_process_ns_desc(ctrl, ids, cur);
>>>>>> +		len = nvme_process_ns_desc(ctrl, ids, cur, &csi_seen);
>>>>>>     		if (len < 0)
>>>>>>     			goto free_data;
>>>>>>     		len += sizeof(*cur);
>>>>>>     	}
>>>>>>     free_data:
>>>>>> +	if (!status && nvme_multi_css(ctrl) && !csi_seen) {
>>>>>
>>>>> We will clear the status if we detect a path error, that is to
>>>>> avoid needlessly removing the ns for path failures, so you should
>>>>> check at the goto site.
>>>>
>>>> The problem is that this check has to be done after processing all the
>>>> ns descs, so it has to be the final thing: no matter if
>>>> nvme_process_ns_desc() returned an error, or if NVME_NIDT_CSI simply
>>>> wasn't part of the ns desc list and the loop reached the end without
>>>> error.
>>>>
>>>> Even if the nvme command failed and the status was cleared:
>>>>
>>>>                   if (status > 0 && !(status & NVME_SC_DNR))
>>>>                           status = 0;
>>>
>>> This check is so weird. What has DNR got to do with whether or not we
>>> want to continue with this namespace? The commit that adds this says
>>> it's to check for a host failed IO, but a controller can just as validly
>>> set DNR in its error status, in which case we'd still want to clear the
>>> status.
>>
>> The reason is that if this error is not a DNR error, a retry is
>> expected to succeed, so we don't want to remove the namespace.
> 
> And what if it is a DNR error? For example, the controller simply
> doesn't support this CNS value. While the controller should support it,
> we definitely don't need it for NVM command set namespaces.

Maybe I misunderstand the comment, but if you see a DNR error, it means
that the controller replied with an error and it's final, so then you
can have extra checks.

I think the point here is that if we got a non-DNR status, we should not
take any action on this namespace because we need to retry first
(either because the controller is unavailable or it needs the host to
retry for another reason).

>> The problem that this solves is the fact that we have various error
>> recovery conditions (interconnect issues, failures, resets) and the
>> async works are designed to continue to run, issue I/O and fail. The
>> scan work will revalidate namespaces and remove them if it fails to
>> do so, which is inherently wrong if they are simply inaccessible to
>> the host at this time.
> 
> Relying on a specific bit in the status code seems a bit indirect for
> this kind of condition.

I actually think this approach covers exactly what we are trying to
achieve. But I'll let others comment on this.


* Re: [PATCHv3 3/5] nvme: implement I/O Command Sets Command Set support
  2020-06-24 17:46             ` Sagi Grimberg
@ 2020-06-24 18:03               ` Keith Busch
  2020-06-24 18:28                 ` Sagi Grimberg
  0 siblings, 1 reply; 20+ messages in thread
From: Keith Busch @ 2020-06-24 18:03 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: axboe, Niklas Cassel, Daniel Wagner, Martin K . Petersen,
	Johannes Thumshirn, linux-nvme, linux-block,
	Javier González, hch, Matias Bjorling

On Wed, Jun 24, 2020 at 10:46:03AM -0700, Sagi Grimberg wrote:
> On 6/24/20 10:25 AM, Keith Busch wrote:
> > On Tue, Jun 23, 2020 at 04:17:30PM -0700, Sagi Grimberg wrote:
> > And what if it is a DNR error? For example, the controller simply
> > doesn't support this CNS value. While the controller should support it,
> > we definitely don't need it for NVM command set namespaces.
> 
> Maybe I misunderstand the comment, but if you see a DNR error, it means
> that the controller replied with an error and it's final, so then you
> can have extra checks.

If the controller does not support the CNS value, it may return Invalid
Field with DNR set. That error currently gets propagated back to
nvme_init_ns_head(), which then abandons the namespace. Just as the code
comments say, we have historically been clearing such errors because
we have other ways to identify the namespace, but now we're not clearing
that error.
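
Roughly the propagation path being described (a sketch of the 5.8-era
call chain; the "out" label is illustrative, not the exact code):

--
	/* in nvme_init_ns_head() */
	ret = nvme_report_ns_ids(ctrl, nsid, id, &ids);
	if (ret)
		goto out;	/* non-zero status abandons the namespace */

	/*
	 * nvme_report_ns_ids() in turn calls nvme_identify_ns_descs(),
	 * whose status is what the DNR check does or does not clear.
	 */
--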


* Re: [PATCHv3 3/5] nvme: implement I/O Command Sets Command Set support
  2020-06-24 18:03               ` Keith Busch
@ 2020-06-24 18:28                 ` Sagi Grimberg
  2020-06-24 18:33                   ` Sagi Grimberg
  0 siblings, 1 reply; 20+ messages in thread
From: Sagi Grimberg @ 2020-06-24 18:28 UTC (permalink / raw)
  To: Keith Busch
  Cc: axboe, Niklas Cassel, Daniel Wagner, Martin K . Petersen,
	Johannes Thumshirn, linux-nvme, linux-block,
	Javier González, hch, Matias Bjorling


>>> And what if it is a DNR error? For example, the controller simply
>>> doesn't support this CNS value. While the controller should support it,
>>> we definitely don't need it for NVM command set namespaces.
>>
>> Maybe I misunderstand the comment, but if you see a DNR error, it means
>> that the controller replied with an error and it's final, so then you
>> can have extra checks.
> 
> If the controller does not support the CNS value, it may return Invalid
> Field with DNR set. That error currently gets propagated back to
> nvme_init_ns_head(), which then abandons the namespace. Just as the code
> comments say, we have historically been clearing such errors because
> we have other ways to identify the namespace, but now we're not clearing
> that error.

I don't understand what you are saying Keith.

My comment was for this block:
--
	if (!status && nvme_multi_css(ctrl) && !csi_seen) {
		dev_warn(ctrl->device, "Command set not reported for nsid:%d\n",
			 nsid);
		status = -EINVAL;
	}
--

I was saying that !status doesn't necessarily mean success, but it can
also mean that it's a retryable error status (due to the transport or
the controller). If we see a retryable error, we should still clear the
status so we don't abandon the namespace.

This for example would achieve that:
--
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index ba512c45b40f..46d8a8379aff 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1126,12 +1126,21 @@ static int nvme_identify_ns_descs(struct nvme_ctrl *ctrl, unsigned nsid,
         if (status) {
                 dev_warn(ctrl->device,
                         "Identify Descriptors failed (%d)\n", status);
-                /*
-                 * Don't treat an error as fatal, as we potentially already
-                 * have a NGUID or EUI-64.
-                 */
-               if (status > 0 && !(status & NVME_SC_DNR))
-                       status = 0;
+
+               if (status > 0 && !(status & NVME_SC_DNR)) {
+                       if (nvme_multi_css(ctrl) && !csi_seen) {
+                               dev_warn(ctrl->device,
+                                       "Command set not reported for nsid:%d\n",
+                                       nsid);
+                               status = -EINVAL;
+                       } else {
+                               /*
+                                * Don't treat an error as fatal, as we
+                                * potentially already have a NGUID or EUI-64.
+                                */
+                               status = 0;
+                       }
+               }
                 goto free_data;
         }
--


* Re: [PATCHv3 3/5] nvme: implement I/O Command Sets Command Set support
  2020-06-24 18:28                 ` Sagi Grimberg
@ 2020-06-24 18:33                   ` Sagi Grimberg
  2020-06-24 18:40                     ` Keith Busch
  0 siblings, 1 reply; 20+ messages in thread
From: Sagi Grimberg @ 2020-06-24 18:33 UTC (permalink / raw)
  To: Keith Busch
  Cc: axboe, Niklas Cassel, Daniel Wagner, Martin K . Petersen,
	Johannes Thumshirn, linux-nvme, linux-block,
	Javier González, hch, Matias Bjorling


>> If the controller does not support the CNS value, it may return Invalid
>> Field with DNR set. That error currently gets propagated back to
>> nvme_init_ns_head(), which then abandons the namespace. Just as the code
>> comments say, we have historically been clearing such errors because
>> we have other ways to identify the namespace, but now we're not clearing
>> that error.
> 
> I don't understand what you are saying Keith.
> 
> My comment was for this block:
> -- 
>      if (!status && nvme_multi_css(ctrl) && !csi_seen) {
>          dev_warn(ctrl->device, "Command set not reported for nsid:%d\n",
>               nsid);
>          status = -EINVAL;
>      }
> -- 
> 
> I was saying that !status doesn't necessarily mean success, but it can
> also mean that it's a retryable error status (due to the transport or
> the controller). If we see a retryable error, we should still clear the
> status so we don't abandon the namespace.
> 
> This for example would achieve that:

Sorry, meant this:
--
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index ba512c45b40f..3187cf768d08 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1130,8 +1130,14 @@ static int nvme_identify_ns_descs(struct nvme_ctrl *ctrl, unsigned nsid,
                  * Don't treat an error as fatal, as we potentially already
                   * have a NGUID or EUI-64.
                   */
-               if (status > 0 && !(status & NVME_SC_DNR))
+               if (status > 0 && !(status & NVME_SC_DNR)) {
                         status = 0;
+               } else if (status == 0 && nvme_multi_css(ctrl) && !csi_seen) {
+                               dev_warn(ctrl->device,
+                                       "Command set not reported for nsid:%d\n",
+                                       nsid);
+                               status = -EINVAL;
+               }
                 goto free_data;
         }
--


* Re: [PATCHv3 3/5] nvme: implement I/O Command Sets Command Set support
  2020-06-24 18:33                   ` Sagi Grimberg
@ 2020-06-24 18:40                     ` Keith Busch
  2020-06-24 19:03                       ` Sagi Grimberg
  0 siblings, 1 reply; 20+ messages in thread
From: Keith Busch @ 2020-06-24 18:40 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: axboe, Niklas Cassel, Daniel Wagner, Martin K . Petersen,
	Johannes Thumshirn, linux-nvme, linux-block,
	Javier González, hch, Matias Bjorling

On Wed, Jun 24, 2020 at 11:33:25AM -0700, Sagi Grimberg wrote:
> > > If the controller does not support the CNS value, it may return Invalid
> > > Field with DNR set. That error currently gets propagated back to
> > > nvme_init_ns_head(), which then abandons the namespace. Just as the code
> > > comments say, we have historically been clearing such errors because
> > > we have other ways to identify the namespace, but now we're not clearing
> > > that error.
> > 
> > I don't understand what you are saying Keith.
> > 
> > My comment was for this block:
> > -- 
> >      if (!status && nvme_multi_css(ctrl) && !csi_seen) {
> >          dev_warn(ctrl->device, "Command set not reported for nsid:%d\n",
> >               nsid);
> >          status = -EINVAL;
> >      }
> > -- 
> > 
> > I was saying that !status doesn't necessarily mean success, but it can
> > also mean that it's a retryable error status (due to the transport or
> > the controller). If we see a retryable error, we should still clear the
> > status so we don't abandon the namespace.
> > 
> > This for example would achieve that:

We're not talking about the same thing. I am only talking about what
introduced the DNR check, from commit 59c7c3caaaf87.

I know you added it because you want to abort comparing identifiers on a
rescan when retrieving the identifiers failed. That's fine, but I am
saying this fails namespace creation in the first place for some types
of devices that used to succeed.


* Re: [PATCHv3 3/5] nvme: implement I/O Command Sets Command Set support
  2020-06-24 18:40                     ` Keith Busch
@ 2020-06-24 19:03                       ` Sagi Grimberg
  2020-06-24 21:49                         ` Keith Busch
  0 siblings, 1 reply; 20+ messages in thread
From: Sagi Grimberg @ 2020-06-24 19:03 UTC (permalink / raw)
  To: Keith Busch
  Cc: axboe, Niklas Cassel, Daniel Wagner, Martin K . Petersen,
	Johannes Thumshirn, linux-nvme, linux-block,
	Javier González, hch, Matias Bjorling


>>>> If the controller does not support the CNS value, it may return Invalid
>>>> Field with DNR set. That error currently gets propagated back to
>>>> nvme_init_ns_head(), which then abandons the namespace. Just as the code
>>>> comments say, we had historically been clearing such errors because
>>>> we have other ways to identify the namespace, but now we're not clearing
>>>> that error.
>>>
>>> I don't understand what you are saying, Keith.
>>>
>>> My comment was for this block:
>>> -- 
>>>       if (!status && nvme_multi_css(ctrl) && !csi_seen) {
>>>           dev_warn(ctrl->device, "Command set not reported for nsid:%d\n",
>>>                nsid);
>>>           status = -EINVAL;
>>>       }
>>> -- 
>>>
>>> I was saying that !status doesn't necessarily mean success, but it can
>>> also mean that it's a retry-able error status (due to transport or
>>> controller). If we see a retry-able error we should still clear the
>>> status so we don't abandon the namespace.
>>>
>>> This for example would achieve that:
> 
> We're not talking about the same thing. I am only talking about what
> introduced the DNR check, from commit 59c7c3caaaf87.
> 
> I know you added it because you want to abort comparing identifiers on a
> rescan when retrieving the identifiers failed. That's fine, but I am
> saying this fails namespace creation in the first place for some types
> of devices that used to succeed.

OK, now I think I understand (thanks for clarifying that the discussion
is not on patch 3/5, but rather on 59c7c3caaaf87).

So the original proposal was to check NVME_SC_HOST_PATH_ERROR (and now
we have NVME_SC_HOST_ABORTED_CMD) but with the review Christoph gave
it seemed to make more sense that we generalize and check the DNR bit.

We could restrict it back to checking that the status is
NVME_SC_HOST_PATH_ERROR or NVME_SC_HOST_ABORTED_CMD if you think it
creates problems. However, if we keep the code as is, the original
comment still holds.
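
For reference, the restricted variant would look roughly like this
(a sketch, assuming the original proposal simply special-cased the two
host/transport statuses in nvme_identify_ns_descs):
--
	if (status == NVME_SC_HOST_PATH_ERROR ||
	    status == NVME_SC_HOST_ABORTED_CMD)
		status = 0;
--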

_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCHv3 3/5] nvme: implement I/O Command Sets Command Set support
  2020-06-24 19:03                       ` Sagi Grimberg
@ 2020-06-24 21:49                         ` Keith Busch
  2020-06-24 22:54                           ` Sagi Grimberg
  0 siblings, 1 reply; 20+ messages in thread
From: Keith Busch @ 2020-06-24 21:49 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: axboe, Niklas Cassel, Daniel Wagner, Martin K . Petersen,
	Johannes Thumshirn, linux-nvme, linux-block,
	Javier González, hch, Matias Bjorling

On Wed, Jun 24, 2020 at 12:03:41PM -0700, Sagi Grimberg wrote:
> 
> > > > > If the controller does not support the CNS value, it may return Invalid
> > > > > Field with DNR set. That error currently gets propagated back to
> > > > > nvme_init_ns_head(), which then abandons the namespace. Just as the code
> > > > > comments say, we had historically been clearing such errors because
> > > > > we have other ways to identify the namespace, but now we're not clearing
> > > > > that error.
> > > > 
> > > > I don't understand what you are saying, Keith.
> > > > 
> > > > My comment was for this block:
> > > > -- 
> > > >       if (!status && nvme_multi_css(ctrl) && !csi_seen) {
> > > >           dev_warn(ctrl->device, "Command set not reported for nsid:%d\n",
> > > >                nsid);
> > > >           status = -EINVAL;
> > > >       }
> > > > -- 
> > > > 
> > > > I was saying that !status doesn't necessarily mean success, but it can
> > > > also mean that it's a retry-able error status (due to transport or
> > > > controller). If we see a retry-able error we should still clear the
> > > > status so we don't abandon the namespace.
> > > > 
> > > > This for example would achieve that:
> > 
> > We're not talking about the same thing. I am only talking about what
> > introduced the DNR check, from commit 59c7c3caaaf87.
> > 
> > I know you added it because you want to abort comparing identifiers on a
> > rescan when retrieving the identifiers failed. That's fine, but I am
> > saying this fails namespace creation in the first place for some types
> > of devices that used to succeed.
> 
> OK, now I think I understand (thanks for clarifying that the discussion
> is not on patch 3/5, but rather on 59c7c3caaaf87).
> 
> So the original proposal was to check NVME_SC_HOST_PATH_ERROR (and now
> we have NVME_SC_HOST_ABORTED_CMD) but with the review Christoph gave
> it seemed to make more sense that we generalize and check the DNR bit.

Okay, I didn't question this approach when it first went through, so
sorry about this digression, but I really don't get how this DNR check
helps at all.

The code currently returns an error if DNR is set. Based on the commit
messages, it sounds like you need that error to skip comparing
identifiers through nvme's scan_work calling revalidate_disk():
suppressing the error has revalidate_disk() fail with -ENODEV when
comparing identifiers fails.

Since we do return the error when DNR is set, we skip comparing
identifiers and return blk_status_to_errno(nvme_error_status(ret))
instead. How is that errno an improvement?

And then if DNR is not set, we suppress the error and proceed with
comparing identifiers??

_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCHv3 3/5] nvme: implement I/O Command Sets Command Set support
  2020-06-24 21:49                         ` Keith Busch
@ 2020-06-24 22:54                           ` Sagi Grimberg
  2020-06-24 23:54                             ` Keith Busch
  0 siblings, 1 reply; 20+ messages in thread
From: Sagi Grimberg @ 2020-06-24 22:54 UTC (permalink / raw)
  To: Keith Busch
  Cc: axboe, Niklas Cassel, Daniel Wagner, Martin K . Petersen,
	Johannes Thumshirn, linux-nvme, linux-block,
	Javier González, hch, Matias Bjorling



On 6/24/20 2:49 PM, Keith Busch wrote:
> On Wed, Jun 24, 2020 at 12:03:41PM -0700, Sagi Grimberg wrote:
>>
>>>>>> If the controller does not support the CNS value, it may return Invalid
>>>>>> Field with DNR set. That error currently gets propagated back to
>>>>>> nvme_init_ns_head(), which then abandons the namespace. Just as the code
>>>>>> comments say, we had historically been clearing such errors because
>>>>>> we have other ways to identify the namespace, but now we're not clearing
>>>>>> that error.
>>>>>
>>>>> I don't understand what you are saying, Keith.
>>>>>
>>>>> My comment was for this block:
>>>>> -- 
>>>>>        if (!status && nvme_multi_css(ctrl) && !csi_seen) {
>>>>>            dev_warn(ctrl->device, "Command set not reported for nsid:%d\n",
>>>>>                 nsid);
>>>>>            status = -EINVAL;
>>>>>        }
>>>>> -- 
>>>>>
>>>>> I was saying that !status doesn't necessarily mean success, but it can
>>>>> also mean that it's a retry-able error status (due to transport or
>>>>> controller). If we see a retry-able error we should still clear the
>>>>> status so we don't abandon the namespace.
>>>>>
>>>>> This for example would achieve that:
>>>
>>> We're not talking about the same thing. I am only talking about what
>>> introduced the DNR check, from commit 59c7c3caaaf87.
>>>
>>> I know you added it because you want to abort comparing identifiers on a
>>> rescan when retrieving the identifiers failed. That's fine, but I am
>>> saying this fails namespace creation in the first place for some types
>>> of devices that used to succeed.
>>
>> OK, now I think I understand (thanks for clarifying that the discussion
>> is not on patch 3/5, but rather on 59c7c3caaaf87).
>>
>> So the original proposal was to check NVME_SC_HOST_PATH_ERROR (and now
>> we have NVME_SC_HOST_ABORTED_CMD) but with the review Christoph gave
>> it seemed to make more sense that we generalize and check the DNR bit.
> 
> Okay, I didn't question this approach when it first went through, so
> sorry about this digression, but I really don't get how this DNR check
> helps at all.
> 
> The code currently returns an error if DNR is set.

Right.

> Based on the commit
> messages, it sounds like you need that error to skip comparing
> identifiers through nvme's scan_work calling revalidate_disk():
> suppressing the error has revalidate_disk() fail with -ENODEV when
> comparing identifiers fails.

Your understanding is correct.

> Since we do return the error when DNR is set, we skip comparing
> identifiers and return blk_status_to_errno(nvme_error_status(ret))
> instead. How is that errno an improvement?

See nvme_revalidate_disk:
out:
         /*
          * Only fail the function if we got a fatal error back from the
          * device, otherwise ignore the error and just move on.
          */
         if (ret == -ENOMEM || (ret > 0 && !(ret & NVME_SC_DNR)))
                 ret = 0;
         else if (ret > 0)
                 ret = blk_status_to_errno(nvme_error_status(ret));
         return ret;

So we don't actually propagate the error back if it's a non-DNR (hence
retry-able) error. The errno was there before in order to not leak NVMe
errors to the block layer.

> And then if DNR is not set, we suppress the error and proceed with
> comparing identifiers??

Wait, I think as I re-read it, it's backwards. The intent was to indeed
clear the error for the DNR case and propagate the error for the non-DNR
case!

The code needs to be:
--
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 2afed32d3892..3e84ab6c2bd3 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1130,7 +1130,7 @@ static int nvme_identify_ns_descs(struct nvme_ctrl *ctrl, unsigned nsid,
                  * Don't treat an error as fatal, as we potentially already
                  * have a NGUID or EUI-64.
                  */
-               if (status > 0 && !(status & NVME_SC_DNR))
+               if (status > 0 && (status & NVME_SC_DNR))
                         status = 0;
                 goto free_data;
         }
--

This way, if the controller failed the Identify, it will be with a DNR
status and we will silently ignore it; and if the transport failed, it's
a non-DNR status, and we propagate the status back, skip the id compare,
and then silently ignore the error in nvme_revalidate_disk (as above).
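
Spelling out the three cases as a toy userspace model (a sketch; the
status constants are from include/linux/nvme.h, the rest is
illustrative):
--
#include <stdio.h>

#define NVME_SC_DNR		0x4000
#define NVME_SC_INVALID_FIELD	0x0002
#define NVME_SC_HOST_PATH_ERROR	0x0370

static const char *outcome(int status)
{
	if (status == 0)
		return "success: compare identifiers as usual";
	if (status & NVME_SC_DNR)
		return "DNR: clear status, fall back to NGUID/EUI-64";
	return "retryable: propagate, nvme_revalidate_disk ignores it";
}

int main(void)
{
	printf("%s\n", outcome(0));
	printf("%s\n", outcome(NVME_SC_INVALID_FIELD | NVME_SC_DNR));
	printf("%s\n", outcome(NVME_SC_HOST_PATH_ERROR));
	return 0;
}
--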

Looking into the original fix we had internally, this was the case, and
it got fat-fingered along the way, I can only assume...

Will send a fix right away. Good catch, Keith!

_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCHv3 3/5] nvme: implement I/O Command Sets Command Set support
  2020-06-24 22:54                           ` Sagi Grimberg
@ 2020-06-24 23:54                             ` Keith Busch
  0 siblings, 0 replies; 20+ messages in thread
From: Keith Busch @ 2020-06-24 23:54 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: axboe, Niklas Cassel, Daniel Wagner, Martin K . Petersen,
	Johannes Thumshirn, linux-nvme, linux-block,
	Javier González, hch, Matias Bjorling

On Wed, Jun 24, 2020 at 03:54:05PM -0700, Sagi Grimberg wrote:
> The code needs to be:
> --
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 2afed32d3892..3e84ab6c2bd3 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -1130,7 +1130,7 @@ static int nvme_identify_ns_descs(struct nvme_ctrl *ctrl, unsigned nsid,
>                   * Don't treat an error as fatal, as we potentially already
>                   * have a NGUID or EUI-64.
>                   */
> -               if (status > 0 && !(status & NVME_SC_DNR))
> +               if (status > 0 && (status & NVME_SC_DNR))
>                         status = 0;
>                 goto free_data;
>         }
> --

Aha, I was assuming the code was the way you wanted it, hence my
confusion :)

The above makes sense walking through the different scenarios. I needed
to reconcile this in order to understand how we'll address it with
Niklas' patch that started this conversation.

_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCHv3 3/5] nvme: implement I/O Command Sets Command Set support
       [not found]   ` <69e8e88c-097b-368d-58f4-85d11110386d@grimberg.me>
       [not found]     ` <20200623112551.GB117742@localhost.localdomain>
@ 2020-06-26  8:54     ` Christoph Hellwig
  1 sibling, 0 replies; 20+ messages in thread
From: Christoph Hellwig @ 2020-06-26  8:54 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: axboe, Niklas Cassel, Daniel Wagner, Martin K . Petersen,
	Johannes Thumshirn, linux-nvme, linux-block, Keith Busch,
	Javier González, hch, Matias Bjørling

On Tue, Jun 23, 2020 at 01:53:47AM -0700, Sagi Grimberg wrote:
>>   	if (ns->lba_shift == 0)
>>   		ns->lba_shift = 9;
>>   +	switch (ns->head->ids.csi) {
>> +	case NVME_CSI_NVM:
>> +		break;
>> +	default:
>> +		dev_warn(ctrl->device, "unknown csi:%d ns:%d\n",
>> +			ns->head->ids.csi, ns->head->ns_id);
>> +		return -ENODEV;
>> +	}
>
> Not sure we need a switch-case statement for a single case target...

I think a switch makes inherent sense when there is an identifier that
can have multiple values, even if there only is one for now.
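
(For illustration: patch 5/5 extends the same switch along these lines.
The ZNS arm here, including the nvme_update_zone_info() call and the
surrounding ret/disk/lbaf context, is my sketch of that change, not the
patch verbatim.)
--
	switch (ns->head->ids.csi) {
	case NVME_CSI_NVM:
		break;
	case NVME_CSI_ZNS:
		ret = nvme_update_zone_info(disk, ns, lbaf);
		if (ret)
			return ret;
		break;
	default:
		dev_warn(ctrl->device, "unknown csi:%d ns:%d\n",
			ns->head->ids.csi, ns->head->ns_id);
		return -ENODEV;
	}
--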

_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCHv3 1/5] block: add capacity field to zone descriptors
       [not found] ` <20200622162530.1287650-2-kbusch@kernel.org>
@ 2020-06-26 12:17   ` Jens Axboe
  0 siblings, 0 replies; 20+ messages in thread
From: Jens Axboe @ 2020-06-26 12:17 UTC (permalink / raw)
  To: Keith Busch
  Cc: sagig, Chaitanya Kulkarni, Johannes Thumshirn, Daniel Wagner,
	Linux NVMe, linux-block, Martin K . Petersen,
	Javier González, Christoph Hellwig, Matias Bjørling

On Mon, Jun 22, 2020 at 10:25 AM Keith Busch <kbusch@kernel.org> wrote:
>
> From: Matias Bjørling <matias.bjorling@wdc.com>
>
> In the zoned storage model, the sectors within a zone are typically all
> writeable. With the introduction of the Zoned Namespace (ZNS) Command
> Set in the NVM Express organization, the model was extended to have a
> specific writeable capacity.
>
> Extend the zone descriptor data structure with a zone capacity field to
> indicate to the user how many sectors in a zone are writeable.
>
> Introduce backward compatibility in the zone report ioctl by extending
> the zone report header data structure with a flags field to indicate if
> the capacity field is available.
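
A hedged sketch of how userspace might consume the new field, gated on
the report flag (the flag and field names follow the patch description;
treat the exact struct layout as an assumption):
--
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/blkzoned.h>

int main(int argc, char **argv)
{
	struct blk_zone_report *rep;
	unsigned int i, nr = 16;
	int fd;

	if (argc != 2)
		return 1;
	fd = open(argv[1], O_RDONLY);
	if (fd < 0)
		return 1;

	/* report header plus room for nr zone descriptors */
	rep = calloc(1, sizeof(*rep) + nr * sizeof(struct blk_zone));
	rep->sector = 0;
	rep->nr_zones = nr;
	if (ioctl(fd, BLKREPORTZONE, rep) < 0)
		return 1;

	for (i = 0; i < rep->nr_zones; i++) {
		struct blk_zone *z = &rep->zones[i];

		/* older kernels leave the flag clear: no capacity field */
		if (rep->flags & BLK_ZONE_REP_CAPACITY)
			printf("zone %u: len %llu cap %llu\n", i,
			       (unsigned long long)z->len,
			       (unsigned long long)z->capacity);
		else
			printf("zone %u: len %llu\n", i,
			       (unsigned long long)z->len);
	}
	free(rep);
	return 0;
}
--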

Reviewed-by: Jens Axboe <axboe@kernel.dk>

-- 
Jens Axboe


_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCHv3 5/5] nvme: support for zoned namespaces
       [not found] ` <20200622162530.1287650-6-kbusch@kernel.org>
  2020-06-23 17:45   ` [PATCHv3 5/5] nvme: support for zoned namespaces Sagi Grimberg
  2020-06-24  9:11   ` Javier González
@ 2020-06-29 13:53   ` Johannes Thumshirn
  2 siblings, 0 replies; 20+ messages in thread
From: Johannes Thumshirn @ 2020-06-29 13:53 UTC (permalink / raw)
  To: Keith Busch, linux-nvme, hch, sagi, linux-block, axboe
  Cc: Niklas Cassel, Damien Le Moal, Ajay Joshi, Keith Busch,
	Martin K . Petersen, Dmitry Fomichev, Aravind Ramesh,
	Hans Holmberg, Matias Bjorling

On 22/06/2020 18:25, Keith Busch wrote:
> +			le_to_cpu16(id->zoc), ns->head->ns_id);

That should probably be le16_to_cpu()
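
i.e., with only the helper name fixed, the quoted line would read
(sketch of the corrected fragment):
--
			le16_to_cpu(id->zoc), ns->head->ns_id);
--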

_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2020-06-29 13:53 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20200622162530.1287650-1-kbusch@kernel.org>
     [not found] ` <20200622162530.1287650-5-kbusch@kernel.org>
2020-06-23 17:43   ` [PATCHv3 4/5] nvme: support for multi-command set effects Sagi Grimberg
     [not found] ` <20200622162530.1287650-4-kbusch@kernel.org>
     [not found]   ` <69e8e88c-097b-368d-58f4-85d11110386d@grimberg.me>
     [not found]     ` <20200623112551.GB117742@localhost.localdomain>
2020-06-23 14:59       ` [PATCHv3 3/5] nvme: implement I/O Command Sets Command Set support Keith Busch
2020-06-23 22:10       ` Keith Busch
2020-06-23 23:17         ` Sagi Grimberg
2020-06-24 17:25           ` Keith Busch
2020-06-24 17:46             ` Sagi Grimberg
2020-06-24 18:03               ` Keith Busch
2020-06-24 18:28                 ` Sagi Grimberg
2020-06-24 18:33                   ` Sagi Grimberg
2020-06-24 18:40                     ` Keith Busch
2020-06-24 19:03                       ` Sagi Grimberg
2020-06-24 21:49                         ` Keith Busch
2020-06-24 22:54                           ` Sagi Grimberg
2020-06-24 23:54                             ` Keith Busch
2020-06-23 23:20       ` Sagi Grimberg
2020-06-26  8:54     ` Christoph Hellwig
     [not found] ` <20200622162530.1287650-2-kbusch@kernel.org>
2020-06-26 12:17   ` [PATCHv3 1/5] block: add capacity field to zone descriptors Jens Axboe
     [not found] ` <20200622162530.1287650-6-kbusch@kernel.org>
2020-06-23 17:45   ` [PATCHv3 5/5] nvme: support for zoned namespaces Sagi Grimberg
2020-06-24  9:11   ` Javier González
2020-06-29 13:53   ` Johannes Thumshirn

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).