* [PATCH V10 1/8] nvmet: trim args for nvmet_copy_ns_identifier()
2021-03-09 4:58 [PATCH V10 0/8] nvmet: add ZBD backend support Chaitanya Kulkarni
@ 2021-03-09 4:58 ` Chaitanya Kulkarni
2021-03-09 11:34 ` Christoph Hellwig
2021-03-09 4:58 ` [PATCH V10 2/8] nvmet: add NVM Command Set Identifier support Chaitanya Kulkarni
` (6 subsequent siblings)
7 siblings, 1 reply; 25+ messages in thread
From: Chaitanya Kulkarni @ 2021-03-09 4:58 UTC (permalink / raw)
To: linux-nvme; +Cc: hch, kbusch, sagi, damien.lemoal, Chaitanya Kulkarni
The function nvmet_copy_ns_identifier() takes the type, the length, and
a pointer to the actual id. The latter two can be derived from other
parameters: the id from req->ns->XXXid and the len from the type, since
each type has a 1:1 mapping to a length.
Remove the len and id arguments and derive them from the type and
req->ns->XXXid respectively.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
drivers/nvme/target/admin-cmd.c | 36 ++++++++++++++++++++-------------
1 file changed, 22 insertions(+), 14 deletions(-)
diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index fe6b8aa90b53..e6caa35db4d5 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -589,24 +589,36 @@ static void nvmet_execute_identify_nslist(struct nvmet_req *req)
nvmet_req_complete(req, status);
}
-static u16 nvmet_copy_ns_identifier(struct nvmet_req *req, u8 type, u8 len,
- void *id, off_t *off)
+static u16 nvmet_copy_ns_identifier(struct nvmet_req *req, u8 type, off_t *off)
{
- struct nvme_ns_id_desc desc = {
- .nidt = type,
- .nidl = len,
- };
+ struct nvme_ns_id_desc desc = { };
+ void *id;
u16 status;
+ switch (type) {
+ case NVME_NIDT_UUID:
+ desc.nidl = NVME_NIDT_UUID_LEN;
+ desc.nidt = NVME_NIDT_UUID;
+ id = &req->ns->uuid;
+ break;
+ case NVME_NIDT_NGUID:
+ desc.nidl = NVME_NIDT_NGUID_LEN;
+ desc.nidt = NVME_NIDT_NGUID;
+ id = &req->ns->nguid;
+ break;
+ default:
+ return NVME_SC_INTERNAL;
+ }
+
status = nvmet_copy_to_sgl(req, *off, &desc, sizeof(desc));
if (status)
return status;
*off += sizeof(desc);
- status = nvmet_copy_to_sgl(req, *off, id, len);
+ status = nvmet_copy_to_sgl(req, *off, id, desc.nidl);
if (status)
return status;
- *off += len;
+ *off += desc.nidl;
return 0;
}
@@ -621,16 +633,12 @@ static void nvmet_execute_identify_desclist(struct nvmet_req *req)
goto out;
if (memchr_inv(&req->ns->uuid, 0, sizeof(req->ns->uuid))) {
- status = nvmet_copy_ns_identifier(req, NVME_NIDT_UUID,
- NVME_NIDT_UUID_LEN,
- &req->ns->uuid, &off);
+ status = nvmet_copy_ns_identifier(req, NVME_NIDT_UUID, &off);
if (status)
goto out;
}
if (memchr_inv(req->ns->nguid, 0, sizeof(req->ns->nguid))) {
- status = nvmet_copy_ns_identifier(req, NVME_NIDT_NGUID,
- NVME_NIDT_NGUID_LEN,
- &req->ns->nguid, &off);
+ status = nvmet_copy_ns_identifier(req, NVME_NIDT_NGUID, &off);
if (status)
goto out;
}
--
2.22.1
_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme
* Re: [PATCH V10 1/8] nvmet: trim args for nvmet_copy_ns_identifier()
2021-03-09 4:58 ` [PATCH V10 1/8] nvmet: trim args for nvmet_copy_ns_identifier() Chaitanya Kulkarni
@ 2021-03-09 11:34 ` Christoph Hellwig
2021-03-09 20:54 ` Chaitanya Kulkarni
2021-03-09 21:03 ` Chaitanya Kulkarni
0 siblings, 2 replies; 25+ messages in thread
From: Christoph Hellwig @ 2021-03-09 11:34 UTC (permalink / raw)
To: Chaitanya Kulkarni; +Cc: linux-nvme, hch, kbusch, sagi, damien.lemoal
This looks really strange to me. The copy function should just do
the grunt work, while the policy should stay in the callers.
* Re: [PATCH V10 1/8] nvmet: trim args for nvmet_copy_ns_identifier()
2021-03-09 11:34 ` Christoph Hellwig
@ 2021-03-09 20:54 ` Chaitanya Kulkarni
2021-03-09 21:03 ` Chaitanya Kulkarni
1 sibling, 0 replies; 25+ messages in thread
From: Chaitanya Kulkarni @ 2021-03-09 20:54 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: linux-nvme, kbusch, sagi, Damien Le Moal
On 3/9/21 03:34, Christoph Hellwig wrote:
> This looks really strange to me. The copy function should just do
> the grunt work, while the policy should stay in the callers.
>
I'll drop this patch.
* Re: [PATCH V10 1/8] nvmet: trim args for nvmet_copy_ns_identifier()
2021-03-09 11:34 ` Christoph Hellwig
2021-03-09 20:54 ` Chaitanya Kulkarni
@ 2021-03-09 21:03 ` Chaitanya Kulkarni
2021-03-10 8:49 ` Christoph Hellwig
1 sibling, 1 reply; 25+ messages in thread
From: Chaitanya Kulkarni @ 2021-03-09 21:03 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: linux-nvme, kbusch, sagi, Damien Le Moal
On 3/9/21 03:34, Christoph Hellwig wrote:
> This looks really strange to me. The copy function should just do
> the grunt work, while the policy should stay in the callers.
>
Looking at the code again, this is not a generic copy function but a
designated function for the identifier copy, and for each identifier we
have a length parameter that has a 1:1 mapping [1], so why pass those
values on each call?
Anyway, if it is going to make the code harder to read or maintain I'll
drop it. Please let me know.
[1] Identifier to len 1:1 mapping from include/linux/nvme.h :-
#define NVME_NIDT_EUI64_LEN 8
#define NVME_NIDT_NGUID_LEN 16
#define NVME_NIDT_UUID_LEN 16
#define NVME_NIDT_CSI_LEN 1
enum {
NVME_NIDT_EUI64 = 0x01,
NVME_NIDT_NGUID = 0x02,
NVME_NIDT_UUID = 0x03,
NVME_NIDT_CSI = 0x04,
};
* Re: [PATCH V10 1/8] nvmet: trim args for nvmet_copy_ns_identifier()
2021-03-09 21:03 ` Chaitanya Kulkarni
@ 2021-03-10 8:49 ` Christoph Hellwig
2021-03-10 8:52 ` Chaitanya Kulkarni
0 siblings, 1 reply; 25+ messages in thread
From: Christoph Hellwig @ 2021-03-10 8:49 UTC (permalink / raw)
To: Chaitanya Kulkarni
Cc: Christoph Hellwig, linux-nvme, kbusch, sagi, Damien Le Moal
On Tue, Mar 09, 2021 at 09:03:26PM +0000, Chaitanya Kulkarni wrote:
> Looking at the code again, this is not a generic copy function but a
> designated function for the identifier copy, and for each identifier we
> have a length parameter that has a 1:1 mapping [1], so why pass those
> values on each call?
The point is to keep the levels of abstraction sane. Right now
nvmet_copy_ns_identifier is a dumb copy function that doesn't know
about the type of the identifier copied, and the caller has all
information about each identifier, nicely together in a single line.
If you move the size derivation into nvmet_copy_ns_identifier everyone
adding a new identifier or just trying to understand the code needs
to deal with two functions. Due to the extra branches it will probably
also generate worse binary code.
* Re: [PATCH V10 1/8] nvmet: trim args for nvmet_copy_ns_identifier()
2021-03-10 8:49 ` Christoph Hellwig
@ 2021-03-10 8:52 ` Chaitanya Kulkarni
0 siblings, 0 replies; 25+ messages in thread
From: Chaitanya Kulkarni @ 2021-03-10 8:52 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: linux-nvme, kbusch, sagi, Damien Le Moal
> On Mar 10, 2021, at 12:49 AM, Christoph Hellwig <hch@lst.de> wrote:
>
> On Tue, Mar 09, 2021 at 09:03:26PM +0000, Chaitanya Kulkarni wrote:
>> Looking at the code again, this is not a generic copy function but a
>> designated function for the identifier copy, and for each identifier we
>> have a length parameter that has a 1:1 mapping [1], so why pass those
>> values on each call?
>
> The point is to keep the levels of abstraction sane. Right now
> nvmet_copy_ns_identifier is a dumb copy function that doesn't know
> about the type of the identifier copied, and the caller has all
> information about each identifier, nicely together in a single line.
> If you move the size derivation into nvmet_copy_ns_identifier everyone
> adding a new identifier or just trying to understand the code needs
> to deal with two functions. Due to the extra branches it will probably
> also generate worse binary code.
I see, thanks for the explanation. Now I understand it better; I will drop that patch.
* [PATCH V10 2/8] nvmet: add NVM Command Set Identifier support
2021-03-09 4:58 [PATCH V10 0/8] nvmet: add ZBD backend support Chaitanya Kulkarni
2021-03-09 4:58 ` [PATCH V10 1/8] nvmet: trim args for nvmet_copy_ns_identifier() Chaitanya Kulkarni
@ 2021-03-09 4:58 ` Chaitanya Kulkarni
2021-03-09 11:37 ` Christoph Hellwig
2021-03-09 4:58 ` [PATCH V10 3/8] nvmet: add command set supported ctrl cap Chaitanya Kulkarni
` (5 subsequent siblings)
7 siblings, 1 reply; 25+ messages in thread
From: Chaitanya Kulkarni @ 2021-03-09 4:58 UTC (permalink / raw)
To: linux-nvme; +Cc: hch, kbusch, sagi, damien.lemoal, Chaitanya Kulkarni
NVMe TP 4056 allows a controller to support different command sets.
The NVMeOF target currently only supports namespaces that contain
traditional logical blocks that may be randomly read and written. In
some applications there is value in exposing namespaces that contain
logical blocks with special access rules (e.g. sequential-write-required
namespaces such as Zoned Namespaces (ZNS)).
In order to support the Zoned Block Device (ZBD) backend, the controller
needs support for the ZNS Command Set Identifier (CSI).
In this preparation patch, we adjust the code so that it can support the
default command set identifier. We update the namespace data structure
to store the CSI value, which defaults to NVME_CSI_NVM and represents
the traditional logical block namespace type.
The CSI support is required to implement the ZBD backend over the NVMe
ZNS interface, since ZNS commands belong to a different command set than
the default one.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
drivers/nvme/target/admin-cmd.c | 78 +++++++++++++++++++++++++++++----
drivers/nvme/target/core.c | 1 +
drivers/nvme/target/nvmet.h | 1 +
include/linux/nvme.h | 1 +
4 files changed, 72 insertions(+), 9 deletions(-)
diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index e6caa35db4d5..76d9fc0f48c2 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -162,15 +162,8 @@ static void nvmet_execute_get_log_page_smart(struct nvmet_req *req)
nvmet_req_complete(req, status);
}
-static void nvmet_execute_get_log_cmd_effects_ns(struct nvmet_req *req)
+static void nvmet_set_csi_nvm_effects(struct nvme_effects_log *log)
{
- u16 status = NVME_SC_INTERNAL;
- struct nvme_effects_log *log;
-
- log = kzalloc(sizeof(*log), GFP_KERNEL);
- if (!log)
- goto out;
-
log->acs[nvme_admin_get_log_page] = cpu_to_le32(1 << 0);
log->acs[nvme_admin_identify] = cpu_to_le32(1 << 0);
log->acs[nvme_admin_abort_cmd] = cpu_to_le32(1 << 0);
@@ -184,8 +177,45 @@ static void nvmet_execute_get_log_cmd_effects_ns(struct nvmet_req *req)
log->iocs[nvme_cmd_flush] = cpu_to_le32(1 << 0);
log->iocs[nvme_cmd_dsm] = cpu_to_le32(1 << 0);
log->iocs[nvme_cmd_write_zeroes] = cpu_to_le32(1 << 0);
+}
- status = nvmet_copy_to_sgl(req, 0, log, sizeof(*log));
+static u16 nvmet_set_csi_zns_effects(struct nvme_effects_log *log)
+{
+ if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED))
+ return NVME_SC_INVALID_IO_CMD_SET;
+
+ log->iocs[nvme_cmd_zone_append] = cpu_to_le32(1 << 0);
+ log->iocs[nvme_cmd_zone_mgmt_send] = cpu_to_le32(1 << 0);
+ log->iocs[nvme_cmd_zone_mgmt_recv] = cpu_to_le32(1 << 0);
+
+ return NVME_SC_SUCCESS;
+}
+
+static void nvmet_execute_get_log_cmd_effects_ns(struct nvmet_req *req)
+{
+ struct nvme_effects_log *log;
+ u16 status = NVME_SC_SUCCESS;
+
+ log = kzalloc(sizeof(*log), GFP_KERNEL);
+ if (!log) {
+ status = NVME_SC_INTERNAL;
+ goto out;
+ }
+
+ switch (req->cmd->get_log_page.csi) {
+ case NVME_CSI_NVM:
+ nvmet_set_csi_nvm_effects(log);
+ break;
+ case NVME_CSI_ZNS:
+ status = nvmet_set_csi_zns_effects(log);
+ break;
+ default:
+ status = NVME_SC_INVALID_LOG_PAGE;
+ break;
+ }
+
+ if (status == NVME_SC_SUCCESS)
+ status = nvmet_copy_to_sgl(req, 0, log, sizeof(*log));
kfree(log);
out:
@@ -606,6 +636,11 @@ static u16 nvmet_copy_ns_identifier(struct nvmet_req *req, u8 type, off_t *off)
desc.nidt = NVME_NIDT_NGUID;
id = &req->ns->nguid;
break;
+ case NVME_NIDT_CSI:
+ desc.nidl = NVME_NIDT_CSI_LEN;
+ desc.nidt = NVME_NIDT_CSI;
+ id = &req->ns->csi;
+ break;
default:
return NVME_SC_INTERNAL;
}
@@ -623,6 +658,27 @@ static u16 nvmet_copy_ns_identifier(struct nvmet_req *req, u8 type, off_t *off)
return 0;
}
+static u16 nvmet_execute_identify_desclist_csi(struct nvmet_req *req, off_t *o)
+{
+ u16 status;
+
+ switch (req->ns->csi) {
+ case NVME_CSI_NVM:
+ status = nvmet_copy_ns_identifier(req, NVME_NIDT_CSI, o);
+ break;
+ case NVME_CSI_ZNS:
+ if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED))
+ return NVME_SC_INVALID_IO_CMD_SET;
+
+ status = nvmet_copy_ns_identifier(req, NVME_NIDT_CSI, o);
+ break;
+ default:
+ status = NVME_SC_INVALID_IO_CMD_SET;
+ }
+
+ return status;
+}
+
static void nvmet_execute_identify_desclist(struct nvmet_req *req)
{
off_t off = 0;
@@ -643,6 +699,10 @@ static void nvmet_execute_identify_desclist(struct nvmet_req *req)
goto out;
}
+ status = nvmet_execute_identify_desclist_csi(req, &off);
+ if (status)
+ goto out;
+
if (sg_zero_buffer(req->sg, req->sg_cnt, NVME_IDENTIFY_DATA_SIZE - off,
off) != NVME_IDENTIFY_DATA_SIZE - off)
status = NVME_SC_INTERNAL | NVME_SC_DNR;
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index be6fcdaf51a7..7c3dee21474e 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -693,6 +693,7 @@ struct nvmet_ns *nvmet_ns_alloc(struct nvmet_subsys *subsys, u32 nsid)
uuid_gen(&ns->uuid);
ns->buffered_io = false;
+ ns->csi = NVME_CSI_NVM;
return ns;
}
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 4b84edb49f22..f017b1c70a49 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -81,6 +81,7 @@ struct nvmet_ns {
struct pci_dev *p2p_dev;
int pi_type;
int metadata_size;
+ u8 csi;
};
static inline struct nvmet_ns *to_nvmet_ns(struct config_item *item)
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index b08787cd0881..f09fbbb7876b 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -1494,6 +1494,7 @@ enum {
NVME_SC_NS_WRITE_PROTECTED = 0x20,
NVME_SC_CMD_INTERRUPTED = 0x21,
NVME_SC_TRANSIENT_TR_ERR = 0x22,
+ NVME_SC_INVALID_IO_CMD_SET = 0x2C,
NVME_SC_LBA_RANGE = 0x80,
NVME_SC_CAP_EXCEEDED = 0x81,
--
2.22.1
* Re: [PATCH V10 2/8] nvmet: add NVM Command Set Identifier support
2021-03-09 4:58 ` [PATCH V10 2/8] nvmet: add NVM Command Set Identifier support Chaitanya Kulkarni
@ 2021-03-09 11:37 ` Christoph Hellwig
2021-03-09 21:07 ` Chaitanya Kulkarni
0 siblings, 1 reply; 25+ messages in thread
From: Christoph Hellwig @ 2021-03-09 11:37 UTC (permalink / raw)
To: Chaitanya Kulkarni; +Cc: linux-nvme, hch, kbusch, sagi, damien.lemoal
> +static void nvmet_set_csi_nvm_effects(struct nvme_effects_log *log)
> {
> log->acs[nvme_admin_get_log_page] = cpu_to_le32(1 << 0);
> log->acs[nvme_admin_identify] = cpu_to_le32(1 << 0);
> log->acs[nvme_admin_abort_cmd] = cpu_to_le32(1 << 0);
> @@ -184,8 +177,45 @@ static void nvmet_execute_get_log_cmd_effects_ns(struct nvmet_req *req)
> log->iocs[nvme_cmd_flush] = cpu_to_le32(1 << 0);
> log->iocs[nvme_cmd_dsm] = cpu_to_le32(1 << 0);
> log->iocs[nvme_cmd_write_zeroes] = cpu_to_le32(1 << 0);
> +}
>
> +static u16 nvmet_set_csi_zns_effects(struct nvme_effects_log *log)
> +{
> + if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED))
> + return NVME_SC_INVALID_IO_CMD_SET;
I'd move this check into the caller.
> + switch (req->cmd->get_log_page.csi) {
> + case NVME_CSI_NVM:
> + nvmet_set_csi_nvm_effects(log);
> + break;
> + case NVME_CSI_ZNS:
> + status = nvmet_set_csi_zns_effects(log);
> + break;
ZNS also needs a call to nvmet_set_csi_nvm_effects as it also supports
the NVM commands. But more importantly, the ZNS case does not
belong in this patch at all, but in one that actually adds ZNS
support. CSI support is just infrastructure not tied to a new command
set. Same for all the other places referencing ZNS in this patch.
> + default:
> + status = NVME_SC_INVALID_LOG_PAGE;
> + break;
> + }
> +
> + if (status == NVME_SC_SUCCESS)
> + status = nvmet_copy_to_sgl(req, 0, log, sizeof(*log));
Also I'd use a goto for the failure case instead of the check here.
> + switch (req->ns->csi) {
> + case NVME_CSI_NVM:
> + status = nvmet_copy_ns_identifier(req, NVME_NIDT_CSI, o);
> + break;
> + case NVME_CSI_ZNS:
> + if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED))
> + return NVME_SC_INVALID_IO_CMD_SET;
> +
> + status = nvmet_copy_ns_identifier(req, NVME_NIDT_CSI, o);
> + break;
> + default:
> + status = NVME_SC_INVALID_IO_CMD_SET;
> + }
> +
> + return status;
Just return directly from inside the switch statement to simplify this
a bit.
* Re: [PATCH V10 2/8] nvmet: add NVM Command Set Identifier support
2021-03-09 11:37 ` Christoph Hellwig
@ 2021-03-09 21:07 ` Chaitanya Kulkarni
0 siblings, 0 replies; 25+ messages in thread
From: Chaitanya Kulkarni @ 2021-03-09 21:07 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: linux-nvme, kbusch, sagi, Damien Le Moal
On 3/9/21 03:37, Christoph Hellwig wrote:
>> +static void nvmet_set_csi_nvm_effects(struct nvme_effects_log *log)
>> {
>> log->acs[nvme_admin_get_log_page] = cpu_to_le32(1 << 0);
>> log->acs[nvme_admin_identify] = cpu_to_le32(1 << 0);
>> log->acs[nvme_admin_abort_cmd] = cpu_to_le32(1 << 0);
>> @@ -184,8 +177,45 @@ static void nvmet_execute_get_log_cmd_effects_ns(struct nvmet_req *req)
>> log->iocs[nvme_cmd_flush] = cpu_to_le32(1 << 0);
>> log->iocs[nvme_cmd_dsm] = cpu_to_le32(1 << 0);
>> log->iocs[nvme_cmd_write_zeroes] = cpu_to_le32(1 << 0);
>> +}
>>
>> +static u16 nvmet_set_csi_zns_effects(struct nvme_effects_log *log)
>> +{
>> + if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED))
>> + return NVME_SC_INVALID_IO_CMD_SET;
> I'd move this check into the caller.
Okay.
>> + switch (req->cmd->get_log_page.csi) {
>> + case NVME_CSI_NVM:
>> + nvmet_set_csi_nvm_effects(log);
>> + break;
>> + case NVME_CSI_ZNS:
>> + status = nvmet_set_csi_zns_effects(log);
>> + break;
> ZNS also needs a call to nvmet_set_csi_nvm_effects as it also supports
> the NVM commands. But more importantly, the ZNS case does not
> belong in this patch at all, but in one that actually adds ZNS
> support. CSI support is just infrastructure not tied to a new command
> set. Same for all the other places referencing ZNS in this patch.
>
Okay, I'll move all the ZNS related code to the ZNS backend patch.
>> + default:
>> + status = NVME_SC_INVALID_LOG_PAGE;
>> + break;
>> + }
>> +
>> + if (status == NVME_SC_SUCCESS)
>> + status = nvmet_copy_to_sgl(req, 0, log, sizeof(*log));
> Also I'd use a goto for the failure case instead of the check here.
Okay.
>> + switch (req->ns->csi) {
>> + case NVME_CSI_NVM:
>> + status = nvmet_copy_ns_identifier(req, NVME_NIDT_CSI, o);
>> + break;
>> + case NVME_CSI_ZNS:
>> + if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED))
>> + return NVME_SC_INVALID_IO_CMD_SET;
>> +
>> + status = nvmet_copy_ns_identifier(req, NVME_NIDT_CSI, o);
>> + break;
>> + default:
>> + status = NVME_SC_INVALID_IO_CMD_SET;
>> + }
>> +
>> + return status;
> Just return directly from inside the switch statement to simplify this
> a bit.
>
Makes sense.
* [PATCH V10 3/8] nvmet: add command set supported ctrl cap
2021-03-09 4:58 [PATCH V10 0/8] nvmet: add ZBD backend support Chaitanya Kulkarni
2021-03-09 4:58 ` [PATCH V10 1/8] nvmet: trim args for nvmet_copy_ns_identifier() Chaitanya Kulkarni
2021-03-09 4:58 ` [PATCH V10 2/8] nvmet: add NVM Command Set Identifier support Chaitanya Kulkarni
@ 2021-03-09 4:58 ` Chaitanya Kulkarni
2021-03-09 11:38 ` Christoph Hellwig
2021-03-10 7:05 ` Chaitanya Kulkarni
2021-03-09 4:58 ` [PATCH V10 4/8] nvmet: add ZBD over ZNS backend support Chaitanya Kulkarni
` (4 subsequent siblings)
7 siblings, 2 replies; 25+ messages in thread
From: Chaitanya Kulkarni @ 2021-03-09 4:58 UTC (permalink / raw)
To: linux-nvme; +Cc: hch, kbusch, sagi, damien.lemoal, Chaitanya Kulkarni
Update the ctrl->cap register to export the Multiple Command Set
Supported feature.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
drivers/nvme/target/core.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 7c3dee21474e..0eb171388438 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -1115,6 +1115,17 @@ static inline u8 nvmet_cc_iocqes(u32 cc)
return (cc >> NVME_CC_IOCQES_SHIFT) & 0xf;
}
+static inline bool nvmet_cc_css_check(u8 cc_css)
+{
+ switch (cc_css <<= NVME_CC_CSS_SHIFT) {
+ case NVME_CC_CSS_NVM:
+ case NVME_CC_CSS_CSI:
+ return true;
+ default:
+ return false;
+ }
+}
+
static void nvmet_start_ctrl(struct nvmet_ctrl *ctrl)
{
lockdep_assert_held(&ctrl->lock);
@@ -1123,7 +1134,7 @@ static void nvmet_start_ctrl(struct nvmet_ctrl *ctrl)
nvmet_cc_iocqes(ctrl->cc) != NVME_NVM_IOCQES ||
nvmet_cc_mps(ctrl->cc) != 0 ||
nvmet_cc_ams(ctrl->cc) != 0 ||
- nvmet_cc_css(ctrl->cc) != 0) {
+ !nvmet_cc_css_check(nvmet_cc_css(ctrl->cc))) {
ctrl->csts = NVME_CSTS_CFS;
return;
}
@@ -1174,6 +1185,8 @@ static void nvmet_init_cap(struct nvmet_ctrl *ctrl)
{
/* command sets supported: NVMe command set: */
ctrl->cap = (1ULL << 37);
+ if (IS_ENABLED(CONFIG_BLK_DEV_ZONED))
+ ctrl->cap |= (1ULL << 43);
/* CC.EN timeout in 500msec units: */
ctrl->cap |= (15ULL << 24);
/* maximum queue entries supported: */
--
2.22.1
* Re: [PATCH V10 3/8] nvmet: add command set supported ctrl cap
2021-03-09 4:58 ` [PATCH V10 3/8] nvmet: add command set supported ctrl cap Chaitanya Kulkarni
@ 2021-03-09 11:38 ` Christoph Hellwig
2021-03-09 21:09 ` Chaitanya Kulkarni
2021-03-10 7:05 ` Chaitanya Kulkarni
1 sibling, 1 reply; 25+ messages in thread
From: Christoph Hellwig @ 2021-03-09 11:38 UTC (permalink / raw)
To: Chaitanya Kulkarni; +Cc: linux-nvme, hch, kbusch, sagi, damien.lemoal
On Mon, Mar 08, 2021 at 08:58:18PM -0800, Chaitanya Kulkarni wrote:
> Update the ctrl->cap register to export the Multiple Command Set
> Supported feature.
>
> Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Looks good, but this is part of the CSI support, so it should be merged
into the previous patch.
* Re: [PATCH V10 3/8] nvmet: add command set supported ctrl cap
2021-03-09 11:38 ` Christoph Hellwig
@ 2021-03-09 21:09 ` Chaitanya Kulkarni
0 siblings, 0 replies; 25+ messages in thread
From: Chaitanya Kulkarni @ 2021-03-09 21:09 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: linux-nvme, kbusch, sagi, Damien Le Moal
On 3/9/21 03:38, Christoph Hellwig wrote:
> On Mon, Mar 08, 2021 at 08:58:18PM -0800, Chaitanya Kulkarni wrote:
>> Update the ctrl->cap register to export the Multiple Command Set
>> Supported feature.
>>
>> Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
> Looks good, but this is part of the CSI support, so it should be merged
> into the previous patch.
>
I'll merge the previous patch and this one, minus the ZNS bits.
* Re: [PATCH V10 3/8] nvmet: add command set supported ctrl cap
2021-03-09 4:58 ` [PATCH V10 3/8] nvmet: add command set supported ctrl cap Chaitanya Kulkarni
2021-03-09 11:38 ` Christoph Hellwig
@ 2021-03-10 7:05 ` Chaitanya Kulkarni
2021-03-10 7:14 ` hch
1 sibling, 1 reply; 25+ messages in thread
From: Chaitanya Kulkarni @ 2021-03-10 7:05 UTC (permalink / raw)
To: linux-nvme; +Cc: hch, kbusch, sagi, Damien Le Moal
Christoph,
On 3/8/21 20:58, Chaitanya Kulkarni wrote:
> @@ -1174,6 +1185,8 @@ static void nvmet_init_cap(struct nvmet_ctrl *ctrl)
> {
> /* command sets supported: NVMe command set: */
> ctrl->cap = (1ULL << 37);
> + if (IS_ENABLED(CONFIG_BLK_DEV_ZONED))
> + ctrl->cap |= (1ULL << 43);
> /* CC.EN timeout in 500msec units: */
> ctrl->cap |= (15ULL << 24);
> /* maximum queue entries supported: */
> -- 2.22.1
I've completed all the changes from your comments except this hunk.
Based on your comment it needs to be merged with the patch
"nvmet: add NVM Command Set Identifier support",
but since it is based on BLK_DEV_ZONED I think it should be part of the
"nvmet: add ZBD over ZNS backend support" patch. Any advice?
* Re: [PATCH V10 3/8] nvmet: add command set supported ctrl cap
2021-03-10 7:05 ` Chaitanya Kulkarni
@ 2021-03-10 7:14 ` hch
0 siblings, 0 replies; 25+ messages in thread
From: hch @ 2021-03-10 7:14 UTC (permalink / raw)
To: Chaitanya Kulkarni; +Cc: linux-nvme, hch, kbusch, sagi, Damien Le Moal
On Wed, Mar 10, 2021 at 07:05:14AM +0000, Chaitanya Kulkarni wrote:
> Christoph,
>
> On 3/8/21 20:58, Chaitanya Kulkarni wrote:
> > @@ -1174,6 +1185,8 @@ static void nvmet_init_cap(struct nvmet_ctrl *ctrl)
> > {
> > /* command sets supported: NVMe command set: */
> > ctrl->cap = (1ULL << 37);
> > + if (IS_ENABLED(CONFIG_BLK_DEV_ZONED))
> > + ctrl->cap |= (1ULL << 43);
> > /* CC.EN timeout in 500msec units: */
> > ctrl->cap |= (15ULL << 24);
> > /* maximum queue entries supported: */
> > -- 2.22.1
>
> I've completed all the changes from your comments except this hunk.
>
> Based on your comment it needs to be merged with the patch
> "nvmet: add NVM Command Set Identifier support",
> but since it is based on BLK_DEV_ZONED I think it should be part of the
> "nvmet: add ZBD over ZNS backend support" patch. Any advice?
Just drop the conditional. There is no harm in just always supporting
multiple command sets, even if only the NVM command set is advertised.
* [PATCH V10 4/8] nvmet: add ZBD over ZNS backend support
2021-03-09 4:58 [PATCH V10 0/8] nvmet: add ZBD backend support Chaitanya Kulkarni
` (2 preceding siblings ...)
2021-03-09 4:58 ` [PATCH V10 3/8] nvmet: add command set supported ctrl cap Chaitanya Kulkarni
@ 2021-03-09 4:58 ` Chaitanya Kulkarni
2021-03-09 11:41 ` Christoph Hellwig
2021-03-09 4:58 ` [PATCH V10 5/8] nvmet: add nvmet_req_bio put helper for backends Chaitanya Kulkarni
` (3 subsequent siblings)
7 siblings, 1 reply; 25+ messages in thread
From: Chaitanya Kulkarni @ 2021-03-09 4:58 UTC (permalink / raw)
To: linux-nvme; +Cc: hch, kbusch, sagi, damien.lemoal, Chaitanya Kulkarni
NVMe TP 4053 – Zoned Namespaces (ZNS) allows host software to
communicate with a non-volatile memory subsystem using zones for
NVMe-protocol-based controllers. The NVMeOF target already supports
ZNS NVMe-protocol-compliant devices in passthru mode. There are
generic zoned block devices, such as Shingled Magnetic Recording (SMR)
HDDs, that are not based on the NVMe protocol.
This patch adds a ZNS backend to support ZBDs in the NVMeOF target.
This support includes implementing the new command set NVME_CSI_ZNS and
adding command handlers for the ZNS command set, such as NVMe Identify
Controller, NVMe Identify Namespace, NVMe Zone Append, NVMe Zone
Management Send and NVMe Zone Management Receive.
With the new command set identifier we also update the target command
effects log to reflect the ZNS-compliant commands.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
drivers/nvme/target/Makefile | 1 +
drivers/nvme/target/admin-cmd.c | 8 +
drivers/nvme/target/io-cmd-bdev.c | 33 +++-
drivers/nvme/target/nvmet.h | 38 ++++
drivers/nvme/target/zns.c | 308 ++++++++++++++++++++++++++++++
5 files changed, 380 insertions(+), 8 deletions(-)
create mode 100644 drivers/nvme/target/zns.c
diff --git a/drivers/nvme/target/Makefile b/drivers/nvme/target/Makefile
index ebf91fc4c72e..9837e580fa7e 100644
--- a/drivers/nvme/target/Makefile
+++ b/drivers/nvme/target/Makefile
@@ -12,6 +12,7 @@ obj-$(CONFIG_NVME_TARGET_TCP) += nvmet-tcp.o
nvmet-y += core.o configfs.o admin-cmd.o fabrics-cmd.o \
discovery.o io-cmd-file.o io-cmd-bdev.o
nvmet-$(CONFIG_NVME_TARGET_PASSTHRU) += passthru.o
+nvmet-$(CONFIG_BLK_DEV_ZONED) += zns.o
nvme-loop-y += loop.o
nvmet-rdma-y += rdma.o
nvmet-fc-y += fc.o
diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 76d9fc0f48c2..05ef362d70b7 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -719,8 +719,16 @@ static void nvmet_execute_identify(struct nvmet_req *req)
switch (req->cmd->identify.cns) {
case NVME_ID_CNS_NS:
return nvmet_execute_identify_ns(req);
+ case NVME_ID_CNS_CS_NS:
+ if (req->cmd->identify.csi == NVME_CSI_ZNS)
+ return nvmet_execute_identify_cns_cs_ns(req);
+ break;
case NVME_ID_CNS_CTRL:
return nvmet_execute_identify_ctrl(req);
+ case NVME_ID_CNS_CS_CTRL:
+ if (req->cmd->identify.csi == NVME_CSI_ZNS)
+ return nvmet_execute_identify_cns_cs_ctrl(req);
+ break;
case NVME_ID_CNS_NS_ACTIVE_LIST:
return nvmet_execute_identify_nslist(req);
case NVME_ID_CNS_NS_DESC_LIST:
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 9a8b3726a37c..394090bf75a6 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -63,6 +63,14 @@ static void nvmet_bdev_ns_enable_integrity(struct nvmet_ns *ns)
}
}
+void nvmet_bdev_ns_disable(struct nvmet_ns *ns)
+{
+ if (ns->bdev) {
+ blkdev_put(ns->bdev, FMODE_WRITE | FMODE_READ);
+ ns->bdev = NULL;
+ }
+}
+
int nvmet_bdev_ns_enable(struct nvmet_ns *ns)
{
int ret;
@@ -86,15 +94,15 @@ int nvmet_bdev_ns_enable(struct nvmet_ns *ns)
if (IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY_T10))
nvmet_bdev_ns_enable_integrity(ns);
- return 0;
-}
-
-void nvmet_bdev_ns_disable(struct nvmet_ns *ns)
-{
- if (ns->bdev) {
- blkdev_put(ns->bdev, FMODE_WRITE | FMODE_READ);
- ns->bdev = NULL;
+ if (bdev_is_zoned(ns->bdev)) {
+ if (!nvmet_bdev_zns_enable(ns)) {
+ nvmet_bdev_ns_disable(ns);
+ return -EINVAL;
+ }
+ ns->csi = NVME_CSI_ZNS;
}
+
+ return 0;
}
void nvmet_bdev_ns_revalidate(struct nvmet_ns *ns)
@@ -448,6 +456,15 @@ u16 nvmet_bdev_parse_io_cmd(struct nvmet_req *req)
case nvme_cmd_write_zeroes:
req->execute = nvmet_bdev_execute_write_zeroes;
return 0;
+ case nvme_cmd_zone_append:
+ req->execute = nvmet_bdev_execute_zone_append;
+ return 0;
+ case nvme_cmd_zone_mgmt_recv:
+ req->execute = nvmet_bdev_execute_zone_mgmt_recv;
+ return 0;
+ case nvme_cmd_zone_mgmt_send:
+ req->execute = nvmet_bdev_execute_zone_mgmt_send;
+ return 0;
default:
return nvmet_report_invalid_opcode(req);
}
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index f017b1c70a49..cf592035e569 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -247,6 +247,10 @@ struct nvmet_subsys {
unsigned int admin_timeout;
unsigned int io_timeout;
#endif /* CONFIG_NVME_TARGET_PASSTHRU */
+
+#ifdef CONFIG_BLK_DEV_ZONED
+ u8 zasl;
+#endif /* CONFIG_BLK_DEV_ZONED */
};
static inline struct nvmet_subsys *to_subsys(struct config_item *item)
@@ -583,6 +587,40 @@ static inline struct nvme_ctrl *nvmet_passthru_ctrl(struct nvmet_subsys *subsys)
}
#endif /* CONFIG_NVME_TARGET_PASSTHRU */
+#ifdef CONFIG_BLK_DEV_ZONED
+bool nvmet_bdev_zns_enable(struct nvmet_ns *ns);
+void nvmet_execute_identify_cns_cs_ctrl(struct nvmet_req *req);
+void nvmet_execute_identify_cns_cs_ns(struct nvmet_req *req);
+void nvmet_bdev_execute_zone_mgmt_recv(struct nvmet_req *req);
+void nvmet_bdev_execute_zone_mgmt_send(struct nvmet_req *req);
+void nvmet_bdev_execute_zone_append(struct nvmet_req *req);
+#else /* CONFIG_BLK_DEV_ZONED */
+static inline bool nvmet_bdev_zns_enable(struct nvmet_ns *ns)
+{
+ return false;
+}
+static inline void
+nvmet_execute_identify_cns_cs_ctrl(struct nvmet_req *req)
+{
+}
+static inline void
+nvmet_execute_identify_cns_cs_ns(struct nvmet_req *req)
+{
+}
+static inline void
+nvmet_bdev_execute_zone_mgmt_recv(struct nvmet_req *req)
+{
+}
+static inline void
+nvmet_bdev_execute_zone_mgmt_send(struct nvmet_req *req)
+{
+}
+static inline void
+nvmet_bdev_execute_zone_append(struct nvmet_req *req)
+{
+}
+#endif /* CONFIG_BLK_DEV_ZONED */
+
static inline struct nvme_ctrl *
nvmet_req_passthru_ctrl(struct nvmet_req *req)
{
diff --git a/drivers/nvme/target/zns.c b/drivers/nvme/target/zns.c
new file mode 100644
index 000000000000..96bb1b041105
--- /dev/null
+++ b/drivers/nvme/target/zns.c
@@ -0,0 +1,308 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * NVMe ZNS-ZBD command implementation.
+ * Copyright (c) 2020-2021 HGST, a Western Digital Company.
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/nvme.h>
+#include <linux/blkdev.h>
+#include "nvmet.h"
+
+/*
+ * We set the Memory Page Size Minimum (MPSMIN) for the target controller to
+ * 0, to which nvme_enable_ctrl() adds 12, resulting in 2^12 = 4k as the
+ * page_shift value. Use a shift by 12 when calculating the ZASL.
+ */
+#define NVMET_MPSMIN_SHIFT 12
+
+static u16 nvmet_bdev_zns_checks(struct nvmet_req *req)
+{
+ if (!bdev_is_zoned(req->ns->bdev))
+ return NVME_SC_INVALID_NS | NVME_SC_DNR;
+
+ if (req->cmd->zmr.zra != NVME_ZRA_ZONE_REPORT)
+ return NVME_SC_INVALID_FIELD;
+
+ if (req->cmd->zmr.zrasf != NVME_ZRASF_ZONE_REPORT_ALL)
+ return NVME_SC_INVALID_FIELD;
+
+ if (req->cmd->zmr.pr != NVME_REPORT_ZONE_PARTIAL)
+ return NVME_SC_INVALID_FIELD;
+
+ return NVME_SC_SUCCESS;
+}
+
+static inline u8 nvmet_zasl(unsigned int zone_append_sects)
+{
+ /*
+ * Zone Append Size Limit is the value expressed in units of the minimum
+ * memory page size (i.e. 12) and is reported as a power of 2.
+ */
+ return ilog2((zone_append_sects << 9) >> NVMET_MPSMIN_SHIFT);
+}
+
+static inline bool nvmet_zns_update_zasl(struct nvmet_ns *ns)
+{
+ struct request_queue *q = ns->bdev->bd_disk->queue;
+ u8 zasl = nvmet_zasl(queue_max_zone_append_sectors(q));
+
+ if (ns->subsys->zasl)
+ return ns->subsys->zasl < zasl ? false : true;
+
+ ns->subsys->zasl = zasl;
+ return true;
+}
+
+static int nvmet_bdev_validate_zns_zones_cb(struct blk_zone *z,
+ unsigned int i, void *data)
+{
+ if (z->type == BLK_ZONE_TYPE_CONVENTIONAL)
+ return -EOPNOTSUPP;
+ return 0;
+}
+
+static bool nvmet_bdev_has_conv_zones(struct block_device *bdev)
+{
+ int ret;
+
+ if (bdev->bd_disk->queue->conv_zones_bitmap)
+ return true;
+
+ ret = blkdev_report_zones(bdev, 0, blkdev_nr_zones(bdev->bd_disk),
+ nvmet_bdev_validate_zns_zones_cb, NULL);
+
+ return ret <= 0;
+}
+
+bool nvmet_bdev_zns_enable(struct nvmet_ns *ns)
+{
+ if (nvmet_bdev_has_conv_zones(ns->bdev))
+ return false;
+
+ ns->blksize_shift = blksize_bits(bdev_physical_block_size(ns->bdev));
+
+ if (!nvmet_zns_update_zasl(ns))
+ return false;
+
+ return !(get_capacity(ns->bdev->bd_disk) &
+ (bdev_zone_sectors(ns->bdev) - 1));
+}
+
+void nvmet_execute_identify_cns_cs_ctrl(struct nvmet_req *req)
+{
+ u8 zasl = req->sq->ctrl->subsys->zasl;
+ struct nvmet_ctrl *ctrl = req->sq->ctrl;
+ struct nvme_id_ctrl_zns *id;
+ u16 status;
+
+ id = kzalloc(sizeof(*id), GFP_KERNEL);
+ if (!id) {
+ status = NVME_SC_INTERNAL;
+ goto out;
+ }
+
+ if (ctrl->ops->get_mdts)
+ id->zasl = min_t(u8, ctrl->ops->get_mdts(ctrl), zasl);
+ else
+ id->zasl = zasl;
+
+ status = nvmet_copy_to_sgl(req, 0, id, sizeof(*id));
+
+ kfree(id);
+out:
+ nvmet_req_complete(req, status);
+}
+
+void nvmet_execute_identify_cns_cs_ns(struct nvmet_req *req)
+{
+ struct nvme_id_ns_zns *id_zns;
+ u64 zsze;
+ u16 status;
+
+ if (le32_to_cpu(req->cmd->identify.nsid) == NVME_NSID_ALL) {
+ req->error_loc = offsetof(struct nvme_identify, nsid);
+ status = NVME_SC_INVALID_NS | NVME_SC_DNR;
+ goto out;
+ }
+
+ id_zns = kzalloc(sizeof(*id_zns), GFP_KERNEL);
+ if (!id_zns) {
+ status = NVME_SC_INTERNAL;
+ goto out;
+ }
+
+ status = nvmet_req_find_ns(req);
+ if (status) {
+ status = NVME_SC_INTERNAL;
+ goto done;
+ }
+
+ if (!bdev_is_zoned(req->ns->bdev)) {
+ req->error_loc = offsetof(struct nvme_identify, nsid);
+ status = NVME_SC_INVALID_NS | NVME_SC_DNR;
+ goto done;
+ }
+
+ nvmet_ns_revalidate(req->ns);
+ zsze = (bdev_zone_sectors(req->ns->bdev) << 9) >>
+ req->ns->blksize_shift;
+ id_zns->lbafe[0].zsze = cpu_to_le64(zsze);
+ id_zns->mor = cpu_to_le32(bdev_max_open_zones(req->ns->bdev));
+ id_zns->mar = cpu_to_le32(bdev_max_active_zones(req->ns->bdev));
+
+done:
+ status = nvmet_copy_to_sgl(req, 0, id_zns, sizeof(*id_zns));
+ kfree(id_zns);
+out:
+ nvmet_req_complete(req, status);
+}
+
+struct nvmet_report_zone_data {
+ struct nvmet_ns *ns;
+ struct nvme_zone_report *rz;
+};
+
+static int nvmet_bdev_report_zone_cb(struct blk_zone *z, unsigned i, void *d)
+{
+ struct nvmet_report_zone_data *report_zone_data = d;
+ struct nvme_zone_descriptor *entries = report_zone_data->rz->entries;
+ struct nvmet_ns *ns = report_zone_data->ns;
+
+ entries[i].zcap = nvmet_sect_to_lba(ns, z->capacity);
+ entries[i].zslba = nvmet_sect_to_lba(ns, z->start);
+ entries[i].wp = nvmet_sect_to_lba(ns, z->wp);
+ entries[i].za = z->reset ? 1 << 2 : 0;
+ entries[i].zt = z->type;
+ entries[i].zs = z->cond << 4;
+
+ return 0;
+}
+
+void nvmet_bdev_execute_zone_mgmt_recv(struct nvmet_req *req)
+{
+ sector_t sect = nvmet_lba_to_sect(req->ns, req->cmd->zmr.slba);
+ u32 bufsize = (le32_to_cpu(req->cmd->zmr.numd) + 1) << 2;
+ struct nvmet_report_zone_data data = { .ns = req->ns };
+ unsigned int nr_zones;
+ int reported_zones;
+ u16 status;
+
+ status = nvmet_bdev_zns_checks(req);
+ if (status)
+ goto out;
+
+ data.rz = __vmalloc(bufsize, GFP_KERNEL | __GFP_NORETRY | __GFP_ZERO);
+ if (!data.rz) {
+ status = NVME_SC_INTERNAL;
+ goto out;
+ }
+
+ nr_zones = (bufsize - sizeof(struct nvme_zone_report)) /
+ sizeof(struct nvme_zone_descriptor);
+ reported_zones = blkdev_report_zones(req->ns->bdev, sect, nr_zones,
+ nvmet_bdev_report_zone_cb, &data);
+ if (reported_zones < 0) {
+ status = NVME_SC_INTERNAL;
+ goto out_free_report_zones;
+ }
+
+ data.rz->nr_zones = cpu_to_le64(reported_zones);
+
+ status = nvmet_copy_to_sgl(req, 0, data.rz, bufsize);
+
+out_free_report_zones:
+ kvfree(data.rz);
+out:
+ nvmet_req_complete(req, status);
+}
+
+void nvmet_bdev_execute_zone_mgmt_send(struct nvmet_req *req)
+{
+ sector_t sect = nvmet_lba_to_sect(req->ns, req->cmd->zms.slba);
+ sector_t nr_sect = bdev_zone_sectors(req->ns->bdev);
+ u16 status = NVME_SC_SUCCESS;
+ u8 zsa = req->cmd->zms.zsa;
+ enum req_opf op;
+ int ret;
+ const unsigned int zsa_to_op[] = {
+ [NVME_ZONE_OPEN] = REQ_OP_ZONE_OPEN,
+ [NVME_ZONE_CLOSE] = REQ_OP_ZONE_CLOSE,
+ [NVME_ZONE_FINISH] = REQ_OP_ZONE_FINISH,
+ [NVME_ZONE_RESET] = REQ_OP_ZONE_RESET,
+ };
+
+ if (zsa >= ARRAY_SIZE(zsa_to_op) || !zsa_to_op[zsa]) {
+ status = NVME_SC_INVALID_FIELD;
+ goto out;
+ }
+
+ op = zsa_to_op[zsa];
+
+ if (req->cmd->zms.select_all)
+ nr_sect = get_capacity(req->ns->bdev->bd_disk);
+
+ ret = blkdev_zone_mgmt(req->ns->bdev, op, sect, nr_sect, GFP_KERNEL);
+ if (ret)
+ status = NVME_SC_INTERNAL;
+out:
+ nvmet_req_complete(req, status);
+}
+
+void nvmet_bdev_execute_zone_append(struct nvmet_req *req)
+{
+ sector_t sect = nvmet_lba_to_sect(req->ns, req->cmd->rw.slba);
+ u16 status = NVME_SC_SUCCESS;
+ unsigned int total_len = 0;
+ struct scatterlist *sg;
+ int ret = 0, sg_cnt;
+ struct bio *bio;
+
+ if (!nvmet_check_transfer_len(req, nvmet_rw_data_len(req)))
+ return;
+
+ if (!req->sg_cnt) {
+ nvmet_req_complete(req, 0);
+ return;
+ }
+
+ if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
+ bio = &req->b.inline_bio;
+ bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
+ } else {
+ bio = bio_alloc(GFP_KERNEL, req->sg_cnt);
+ }
+
+ bio_set_dev(bio, req->ns->bdev);
+ bio->bi_iter.bi_sector = sect;
+ bio->bi_opf = REQ_OP_ZONE_APPEND | REQ_SYNC | REQ_IDLE;
+ if (req->cmd->rw.control & cpu_to_le16(NVME_RW_FUA))
+ bio->bi_opf |= REQ_FUA;
+
+ for_each_sg(req->sg, sg, req->sg_cnt, sg_cnt) {
+ struct page *p = sg_page(sg);
+ unsigned int l = sg->length;
+ unsigned int o = sg->offset;
+
+ ret = bio_add_zone_append_page(bio, p, l, o);
+ if (ret != sg->length) {
+ status = NVME_SC_INTERNAL;
+ goto out_bio_put;
+ }
+
+ total_len += sg->length;
+ }
+
+ if (total_len != nvmet_rw_data_len(req)) {
+ status = NVME_SC_INTERNAL | NVME_SC_DNR;
+ goto out_bio_put;
+ }
+
+ ret = submit_bio_wait(bio);
+ req->cqe->result.u64 = nvmet_sect_to_lba(req->ns,
+ bio->bi_iter.bi_sector);
+
+out_bio_put:
+ if (bio != &req->b.inline_bio)
+ bio_put(bio);
+ nvmet_req_complete(req, ret < 0 ? NVME_SC_INTERNAL : status);
+}
--
2.22.1
_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme
^ permalink raw reply related [flat|nested] 25+ messages in thread
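The ZASL arithmetic in nvmet_zasl() from the patch above can be checked outside the kernel. The sketch below is plain Python, not kernel code; it mirrors the shift by 9 (512-byte sectors to bytes) and the MPSMIN shift of 12 (4 KiB pages), under the patch's assumption that MPSMIN stays 0:

```python
def ilog2(n: int) -> int:
    # Integer base-2 logarithm, as the kernel's ilog2() yields for
    # powers of two.
    return n.bit_length() - 1

NVMET_MPSMIN_SHIFT = 12  # MPSMIN = 0 maps to a 2^12 = 4 KiB page size

def nvmet_zasl(zone_append_sects: int) -> int:
    # ZASL is reported as a power of two in units of the minimum
    # memory page size: log2(append-limit-in-bytes / 4096).
    return ilog2((zone_append_sects << 9) >> NVMET_MPSMIN_SHIFT)

# A queue limit of 65536 sectors is 32 MiB, i.e. 8192 pages of 4 KiB,
# so the reported ZASL is 13 (2^13 = 8192).
print(nvmet_zasl(65536))  # -> 13
```

A host derives the append limit back as (1 << (zasl + 12)) bytes, which is why the shift used here must match the MPSMIN the target advertises.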
* Re: [PATCH V10 4/8] nvmet: add ZBD over ZNS backend support
2021-03-09 4:58 ` [PATCH V10 4/8] nvmet: add ZBD over ZNS backend support Chaitanya Kulkarni
@ 2021-03-09 11:41 ` Christoph Hellwig
2021-03-09 21:13 ` Chaitanya Kulkarni
0 siblings, 1 reply; 25+ messages in thread
From: Christoph Hellwig @ 2021-03-09 11:41 UTC (permalink / raw)
To: Chaitanya Kulkarni; +Cc: linux-nvme, hch, kbusch, sagi, damien.lemoal
On Mon, Mar 08, 2021 at 08:58:19PM -0800, Chaitanya Kulkarni wrote:
> NVMe TP 4053 – Zoned Namespaces (ZNS) allows host software to
> communicate with a non-volatile memory subsystem using zones for
> NVMe protocol based controllers. NVMeOF already support the ZNS NVMe
> Protocol compliant devices on the target in the passthru mode. There
> are Generic zoned block devices like Shingled Magnetic Recording (SMR)
> HDDs that are not based on the NVMe protocol.
>
> This patch adds ZNS backend to support the ZBDs for NVMeOF target.
>
> This support includes implementing the new command set NVME_CSI_ZNS,
> adding different command handlers for ZNS command set such as
> NVMe Identify Controller, NVMe Identify Namespace, NVMe Zone Append,
> NVMe Zone Management Send and NVMe Zone Management Receive.
>
> With new command set identifier we also update the target command effects
> logs to reflect the ZNS compliant commands.
>
> Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
> ---
> drivers/nvme/target/Makefile | 1 +
> drivers/nvme/target/admin-cmd.c | 8 +
> drivers/nvme/target/io-cmd-bdev.c | 33 +++-
> drivers/nvme/target/nvmet.h | 38 ++++
> drivers/nvme/target/zns.c | 308 ++++++++++++++++++++++++++++++
> 5 files changed, 380 insertions(+), 8 deletions(-)
> create mode 100644 drivers/nvme/target/zns.c
>
> diff --git a/drivers/nvme/target/Makefile b/drivers/nvme/target/Makefile
> index ebf91fc4c72e..9837e580fa7e 100644
> --- a/drivers/nvme/target/Makefile
> +++ b/drivers/nvme/target/Makefile
> @@ -12,6 +12,7 @@ obj-$(CONFIG_NVME_TARGET_TCP) += nvmet-tcp.o
> nvmet-y += core.o configfs.o admin-cmd.o fabrics-cmd.o \
> discovery.o io-cmd-file.o io-cmd-bdev.o
> nvmet-$(CONFIG_NVME_TARGET_PASSTHRU) += passthru.o
> +nvmet-$(CONFIG_BLK_DEV_ZONED) += zns.o
> nvme-loop-y += loop.o
> nvmet-rdma-y += rdma.o
> nvmet-fc-y += fc.o
> diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
> index 76d9fc0f48c2..05ef362d70b7 100644
> --- a/drivers/nvme/target/admin-cmd.c
> +++ b/drivers/nvme/target/admin-cmd.c
> @@ -719,8 +719,16 @@ static void nvmet_execute_identify(struct nvmet_req *req)
> switch (req->cmd->identify.cns) {
> case NVME_ID_CNS_NS:
> return nvmet_execute_identify_ns(req);
> + case NVME_ID_CNS_CS_NS:
> + if (req->cmd->identify.csi == NVME_CSI_ZNS)
> + return nvmet_execute_identify_cns_cs_ns(req);
> + break;
> case NVME_ID_CNS_CTRL:
> return nvmet_execute_identify_ctrl(req);
> + case NVME_ID_CNS_CS_CTRL:
> + if (req->cmd->identify.csi == NVME_CSI_ZNS)
> + return nvmet_execute_identify_cns_cs_ctrl(req);
> + break;
I still find the way this mixes the CNS and CSI fields in a single
switch weird. I'd expect an outer switch for CNS and an inner one for
the CSI value.
> +static u16 nvmet_bdev_zns_checks(struct nvmet_req *req)
> +{
> + if (!bdev_is_zoned(req->ns->bdev))
> + return NVME_SC_INVALID_NS | NVME_SC_DNR;
> +
> + if (req->cmd->zmr.zra != NVME_ZRA_ZONE_REPORT)
> + return NVME_SC_INVALID_FIELD;
> +
> + if (req->cmd->zmr.zrasf != NVME_ZRASF_ZONE_REPORT_ALL)
> + return NVME_SC_INVALID_FIELD;
> +
> + if (req->cmd->zmr.pr != NVME_REPORT_ZONE_PARTIAL)
> + return NVME_SC_INVALID_FIELD;
error_loc information would be useful here.
> +static inline bool nvmet_zns_update_zasl(struct nvmet_ns *ns)
> +{
> + struct request_queue *q = ns->bdev->bd_disk->queue;
> + u8 zasl = nvmet_zasl(queue_max_zone_append_sectors(q));
> +
> + if (ns->subsys->zasl)
> + return ns->subsys->zasl < zasl ? false : true;
No need for the ternary expression here, the comparison will autoconvert to
bool.
^ permalink raw reply [flat|nested] 25+ messages in thread
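The restructuring suggested above — an outer switch on CNS with an inner dispatch on CSI — can be sketched as a small model. This is illustrative Python only; the string names stand in for the kernel handlers, and the NVME_CSI_ZNS value of 2 is per the NVMe base specification:

```python
NVME_CSI_ZNS = 2  # Zoned Namespace command set identifier

def execute_identify(cns: str, csi: int) -> str:
    # Outer dispatch on the CNS value ...
    if cns == "NS":
        return "identify_ns"
    if cns == "CTRL":
        return "identify_ctrl"
    # ... and the command-set-specific CNS values do an inner dispatch
    # on CSI, instead of mixing both fields in one flat switch.
    if cns == "CS_NS":
        return "identify_cs_ns" if csi == NVME_CSI_ZNS else "invalid"
    if cns == "CS_CTRL":
        return "identify_cs_ctrl" if csi == NVME_CSI_ZNS else "invalid"
    return "invalid"

print(execute_identify("CS_NS", NVME_CSI_ZNS))  # -> identify_cs_ns
```

The inner check keeps each CNS case self-contained, so adding another command set later only touches the inner dispatch.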
* Re: [PATCH V10 4/8] nvmet: add ZBD over ZNS backend support
2021-03-09 11:41 ` Christoph Hellwig
@ 2021-03-09 21:13 ` Chaitanya Kulkarni
0 siblings, 0 replies; 25+ messages in thread
From: Chaitanya Kulkarni @ 2021-03-09 21:13 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: linux-nvme, kbusch, sagi, Damien Le Moal
On 3/9/21 03:41, Christoph Hellwig wrote:
> On Mon, Mar 08, 2021 at 08:58:19PM -0800, Chaitanya Kulkarni wrote:
>> NVMe TP 4053 – Zoned Namespaces (ZNS) allows host software to
>> communicate with a non-volatile memory subsystem using zones for
>> NVMe protocol based controllers. NVMeOF already support the ZNS NVMe
>> Protocol compliant devices on the target in the passthru mode. There
>> are Generic zoned block devices like Shingled Magnetic Recording (SMR)
>> HDDs that are not based on the NVMe protocol.
>>
>> This patch adds ZNS backend to support the ZBDs for NVMeOF target.
>>
>> This support includes implementing the new command set NVME_CSI_ZNS,
>> adding different command handlers for ZNS command set such as
>> NVMe Identify Controller, NVMe Identify Namespace, NVMe Zone Append,
>> NVMe Zone Management Send and NVMe Zone Management Receive.
>>
>> With new command set identifier we also update the target command effects
>> logs to reflect the ZNS compliant commands.
>>
>> Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
>> ---
>> drivers/nvme/target/Makefile | 1 +
>> drivers/nvme/target/admin-cmd.c | 8 +
>> drivers/nvme/target/io-cmd-bdev.c | 33 +++-
>> drivers/nvme/target/nvmet.h | 38 ++++
>> drivers/nvme/target/zns.c | 308 ++++++++++++++++++++++++++++++
>> 5 files changed, 380 insertions(+), 8 deletions(-)
>> create mode 100644 drivers/nvme/target/zns.c
>>
>> diff --git a/drivers/nvme/target/Makefile b/drivers/nvme/target/Makefile
>> index ebf91fc4c72e..9837e580fa7e 100644
>> --- a/drivers/nvme/target/Makefile
>> +++ b/drivers/nvme/target/Makefile
>> @@ -12,6 +12,7 @@ obj-$(CONFIG_NVME_TARGET_TCP) += nvmet-tcp.o
>> nvmet-y += core.o configfs.o admin-cmd.o fabrics-cmd.o \
>> discovery.o io-cmd-file.o io-cmd-bdev.o
>> nvmet-$(CONFIG_NVME_TARGET_PASSTHRU) += passthru.o
>> +nvmet-$(CONFIG_BLK_DEV_ZONED) += zns.o
>> nvme-loop-y += loop.o
>> nvmet-rdma-y += rdma.o
>> nvmet-fc-y += fc.o
>> diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
>> index 76d9fc0f48c2..05ef362d70b7 100644
>> --- a/drivers/nvme/target/admin-cmd.c
>> +++ b/drivers/nvme/target/admin-cmd.c
>> @@ -719,8 +719,16 @@ static void nvmet_execute_identify(struct nvmet_req *req)
>> switch (req->cmd->identify.cns) {
>> case NVME_ID_CNS_NS:
>> return nvmet_execute_identify_ns(req);
>> + case NVME_ID_CNS_CS_NS:
>> + if (req->cmd->identify.csi == NVME_CSI_ZNS)
>> + return nvmet_execute_identify_cns_cs_ns(req);
>> + break;
>> case NVME_ID_CNS_CTRL:
>> return nvmet_execute_identify_ctrl(req);
>> + case NVME_ID_CNS_CS_CTRL:
>> + if (req->cmd->identify.csi == NVME_CSI_ZNS)
>> + return nvmet_execute_identify_cns_cs_ctrl(req);
>> + break;
> I still find the way how this mixes the CNS and CSI fields in a single
> switch weird. I'd expect an outer switch for CNS and an inner one for
> the CSI value.
Well, it is just a sanity check for csi. I think the identify.csi checks
should be moved to nvmet_execute_identify_cns_cs_ns() and
nvmet_execute_identify_cns_cs_ctrl() to keep the code simple here.
I'll move that to the respective callers; if you are not okay with that,
please let me know.
>> +static u16 nvmet_bdev_zns_checks(struct nvmet_req *req)
>> +{
>> + if (!bdev_is_zoned(req->ns->bdev))
>> + return NVME_SC_INVALID_NS | NVME_SC_DNR;
>> +
>> + if (req->cmd->zmr.zra != NVME_ZRA_ZONE_REPORT)
>> + return NVME_SC_INVALID_FIELD;
>> +
>> + if (req->cmd->zmr.zrasf != NVME_ZRASF_ZONE_REPORT_ALL)
>> + return NVME_SC_INVALID_FIELD;
>> +
>> + if (req->cmd->zmr.pr != NVME_REPORT_ZONE_PARTIAL)
>> + return NVME_SC_INVALID_FIELD;
> error_loc information would be useful here.
Okay.
>> +static inline bool nvmet_zns_update_zasl(struct nvmet_ns *ns)
>> +{
>> + struct request_queue *q = ns->bdev->bd_disk->queue;
>> + u8 zasl = nvmet_zasl(queue_max_zone_append_sectors(q));
>> +
>> + if (ns->subsys->zasl)
>> + return ns->subsys->zasl < zasl ? false : true;
> No need for the tenary expression here, the comparism will autoconvert to
> bool.
>
Okay.
^ permalink raw reply [flat|nested] 25+ messages in thread
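The nvmet_bdev_execute_zone_mgmt_recv() path guarded by the checks quoted above sizes its report buffer from the 0's-based NUMD field. The arithmetic can be sketched as follows (plain Python; the 64-byte sizes of struct nvme_zone_report and struct nvme_zone_descriptor are taken from the ZNS specification and assumed here, not read from the kernel headers):

```python
ZONE_REPORT_HDR_LEN = 64  # assumed sizeof(struct nvme_zone_report)
ZONE_DESC_LEN = 64        # assumed sizeof(struct nvme_zone_descriptor)

def recv_nr_zones(numd: int) -> int:
    # NUMD is a 0's-based dword count, so the transfer length in bytes
    # is (numd + 1) * 4 -- the (numd + 1) << 2 expression in the patch.
    bufsize = (numd + 1) << 2
    # Everything after the report header is zone descriptors.
    return (bufsize - ZONE_REPORT_HDR_LEN) // ZONE_DESC_LEN

# A 4 KiB transfer (numd = 1023) has room for (4096 - 64) / 64 = 63
# zone descriptors after the report header.
print(recv_nr_zones(1023))  # -> 63
```

This is the same nr_zones value the patch passes to blkdev_report_zones(), capping how many zones the callback may fill in.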
* [PATCH V10 5/8] nvmet: add nvmet_req_bio put helper for backends
2021-03-09 4:58 [PATCH V10 0/8] nvmet: add ZBD backend support Chaitanya Kulkarni
` (3 preceding siblings ...)
2021-03-09 4:58 ` [PATCH V10 4/8] nvmet: add ZBD over ZNS backend support Chaitanya Kulkarni
@ 2021-03-09 4:58 ` Chaitanya Kulkarni
2021-03-09 4:58 ` [PATCH V10 6/8] nvme-core: check ctrl css before setting up zns Chaitanya Kulkarni
` (2 subsequent siblings)
7 siblings, 0 replies; 25+ messages in thread
From: Chaitanya Kulkarni @ 2021-03-09 4:58 UTC (permalink / raw)
To: linux-nvme; +Cc: hch, kbusch, sagi, damien.lemoal, Chaitanya Kulkarni
With the addition of the zns backend we now have three different backends
with the inline bio optimization. That leads to duplicate code for freeing
the bio in all three backends: generic bdev, passthru and generic zns.
Add a helper function to avoid the duplicate code and update the
respective backends.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
drivers/nvme/target/io-cmd-bdev.c | 3 +--
drivers/nvme/target/nvmet.h | 6 ++++++
drivers/nvme/target/passthru.c | 3 +--
drivers/nvme/target/zns.c | 3 +--
4 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 394090bf75a6..9a98b18a50d8 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -172,8 +172,7 @@ static void nvmet_bio_done(struct bio *bio)
struct nvmet_req *req = bio->bi_private;
nvmet_req_complete(req, blk_to_nvme_status(req, bio->bi_status));
- if (bio != &req->b.inline_bio)
- bio_put(bio);
+ nvmet_req_bio_put(req, bio);
}
#ifdef CONFIG_BLK_DEV_INTEGRITY
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index cf592035e569..ba7c609b9427 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -653,4 +653,10 @@ static inline sector_t nvmet_lba_to_sect(struct nvmet_ns *ns, __le64 lba)
return le64_to_cpu(lba) << (ns->blksize_shift - SECTOR_SHIFT);
}
+static inline void nvmet_req_bio_put(struct nvmet_req *req, struct bio *bio)
+{
+ if (bio != &req->b.inline_bio)
+ bio_put(bio);
+}
+
#endif /* _NVMET_H */
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 26c587ccd152..011aeebace55 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -206,8 +206,7 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
for_each_sg(req->sg, sg, req->sg_cnt, i) {
if (bio_add_pc_page(rq->q, bio, sg_page(sg), sg->length,
sg->offset) < sg->length) {
- if (bio != &req->p.inline_bio)
- bio_put(bio);
+ nvmet_req_bio_put(req, bio);
return -EINVAL;
}
}
diff --git a/drivers/nvme/target/zns.c b/drivers/nvme/target/zns.c
index 96bb1b041105..cb480560ebb3 100644
--- a/drivers/nvme/target/zns.c
+++ b/drivers/nvme/target/zns.c
@@ -302,7 +302,6 @@ void nvmet_bdev_execute_zone_append(struct nvmet_req *req)
bio->bi_iter.bi_sector);
out_bio_put:
- if (bio != &req->b.inline_bio)
- bio_put(bio);
+ nvmet_req_bio_put(req, bio);
nvmet_req_complete(req, ret < 0 ? NVME_SC_INTERNAL : status);
}
--
2.22.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
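The helper introduced in this patch encodes a simple ownership rule: a bio embedded inline in the request is never freed on its own, while a separately allocated one is. A minimal model of that rule (illustrative Python, not kernel code):

```python
class Request:
    def __init__(self):
        # Embedded object reused for small transfers; its lifetime is
        # tied to the request, so it must never be freed separately.
        self.inline_bio = object()

def req_bio_put(req: Request, bio, freed: list) -> None:
    # Mirrors nvmet_req_bio_put(): drop the reference only when the
    # bio was allocated separately from the request.
    if bio is not req.inline_bio:
        freed.append(bio)

freed = []
req = Request()
req_bio_put(req, req.inline_bio, freed)  # inline path: nothing freed
req_bio_put(req, object(), freed)        # allocated path: freed
print(len(freed))  # -> 1
```

Centralizing the check means the bdev, passthru and zns backends cannot drift apart on this rule, which is exactly the duplication the patch removes.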
* [PATCH V10 6/8] nvme-core: check ctrl css before setting up zns
2021-03-09 4:58 [PATCH V10 0/8] nvmet: add ZBD backend support Chaitanya Kulkarni
` (4 preceding siblings ...)
2021-03-09 4:58 ` [PATCH V10 5/8] nvmet: add nvmet_req_bio put helper for backends Chaitanya Kulkarni
@ 2021-03-09 4:58 ` Chaitanya Kulkarni
2021-03-09 11:42 ` Christoph Hellwig
2021-03-09 4:58 ` [PATCH V10 7/8] nvme-core: add a helper to print css related error Chaitanya Kulkarni
2021-03-09 4:58 ` [PATCH V10 8/8] nvme: add comments to nvme_zns_alloc_report_buffer Chaitanya Kulkarni
7 siblings, 1 reply; 25+ messages in thread
From: Chaitanya Kulkarni @ 2021-03-09 4:58 UTC (permalink / raw)
To: linux-nvme; +Cc: hch, kbusch, sagi, damien.lemoal, Chaitanya Kulkarni
In the nvme-core, when setting up ZNS in nvme_update_ns_info(), we
currently don't check if the ctrl supports multiple command sets. This can
let buggy controllers through that don't have the right fields set for
multiple command sets.
Add a check by calling nvme_multi_css() in nvme_update_ns_info() to make
sure the controller supports multiple command sets.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
drivers/nvme/host/core.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index e68a8c4ac5a6..3a372db00b9f 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2217,6 +2217,13 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
nvme_update_disk_info(ns->disk, ns, id);
if (ns->head->ids.csi == NVME_CSI_ZNS) {
+ if (!nvme_multi_css(ns->ctrl)) {
+ dev_warn(ns->ctrl->device,
+ "Command set not reported for nsid:%d\n",
+ ns->head->ns_id);
+ ret = -EINVAL;
+ goto out_unfreeze;
+ }
ret = nvme_update_zone_info(ns, lbaf);
if (ret)
goto out_unfreeze;
--
2.22.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* Re: [PATCH V10 6/8] nvme-core: check ctrl css before setting up zns
2021-03-09 4:58 ` [PATCH V10 6/8] nvme-core: check ctrl css before setting up zns Chaitanya Kulkarni
@ 2021-03-09 11:42 ` Christoph Hellwig
2021-03-09 15:03 ` Christoph Hellwig
0 siblings, 1 reply; 25+ messages in thread
From: Christoph Hellwig @ 2021-03-09 11:42 UTC (permalink / raw)
To: Chaitanya Kulkarni; +Cc: linux-nvme, hch, kbusch, sagi, damien.lemoal
I'll pick this up for nvme-5.12 as it is an important bug fix; without it,
non-spec-conformant devices might slip through.
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH V10 6/8] nvme-core: check ctrl css before setting up zns
2021-03-09 11:42 ` Christoph Hellwig
@ 2021-03-09 15:03 ` Christoph Hellwig
2021-03-10 1:14 ` Chaitanya Kulkarni
0 siblings, 1 reply; 25+ messages in thread
From: Christoph Hellwig @ 2021-03-09 15:03 UTC (permalink / raw)
To: Chaitanya Kulkarni; +Cc: linux-nvme, hch, kbusch, sagi, damien.lemoal
On Tue, Mar 09, 2021 at 12:42:04PM +0100, Christoph Hellwig wrote:
> I'll pick this up for nvme-5.12 as it is an important bug fix that
> might allow not spec conformant devices to slip through.
So while looking into applying this I think we should move the check
earlier. What do you think of this version?
---
From 43b82e21c6ae2f7d2d99550cb348fbd08610fa06 Mon Sep 17 00:00:00 2001
From: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Date: Mon, 8 Mar 2021 20:58:21 -0800
Subject: nvme-core: check ctrl css before setting up zns
Ensure multiple Command Sets are supported before starting to setup a
ZNS namespace.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
[hch: move the check around a bit]
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
drivers/nvme/host/core.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 63bb1da0861e9d..82ad5eef9d0c30 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4096,6 +4096,12 @@ static void nvme_validate_or_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
nsid);
break;
}
+ if (!nvme_multi_css(ctrl)) {
+ dev_warn(ctrl->device,
+ "command set not reported for nsid: %d\n",
+ ns->head->ns_id);
+ break;
+ }
nvme_alloc_ns(ctrl, nsid, &ids);
break;
default:
--
2.30.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
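nvme_multi_css(), used in both versions of the check, tests how the controller was enabled. As a rough model (the CC.CSS bit layout — bits 6:4, with 110b selecting all supported I/O command sets — follows the NVMe base specification and is assumed here, not taken from the driver):

```python
NVME_CC_CSS_SHIFT = 4
NVME_CC_CSS_MASK = 0x7 << NVME_CC_CSS_SHIFT  # CC.CSS occupies bits 6:4
NVME_CC_CSS_CSI = 0x6 << NVME_CC_CSS_SHIFT   # 110b: all supported command sets

def multi_css(ctrl_config: int) -> bool:
    # Sketch of the nvme_multi_css() idea: true only when the
    # controller configuration selected multiple command sets.
    return (ctrl_config & NVME_CC_CSS_MASK) == NVME_CC_CSS_CSI

print(multi_css(NVME_CC_CSS_CSI))  # -> True
print(multi_css(0))                # -> False (NVM command set only)
```

With a check like this before nvme_alloc_ns(), a ZNS namespace on a controller enabled without multiple command sets is rejected early, which is the point of moving the check up in the revised patch.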
* Re: [PATCH V10 6/8] nvme-core: check ctrl css before setting up zns
2021-03-09 15:03 ` Christoph Hellwig
@ 2021-03-10 1:14 ` Chaitanya Kulkarni
0 siblings, 0 replies; 25+ messages in thread
From: Chaitanya Kulkarni @ 2021-03-10 1:14 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: linux-nvme, kbusch, sagi, Damien Le Moal
On 3/9/21 07:03, Christoph Hellwig wrote:
> On Tue, Mar 09, 2021 at 12:42:04PM +0100, Christoph Hellwig wrote:
>> I'll pick this up for nvme-5.12 as it is an important bug fix that
>> might allow not spec conformant devices to slip through.
> So while looking into applying this I think we should move the check
> earlier. What do you think of this version?
>
> ---
> From 43b82e21c6ae2f7d2d99550cb348fbd08610fa06 Mon Sep 17 00:00:00 2001
> From: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
> Date: Mon, 8 Mar 2021 20:58:21 -0800
> Subject: nvme-core: check ctrl css before setting up zns
>
> Ensure multiple Command Sets are supported before starting to setup a
> ZNS namespace.
>
> Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
> [hch: move the check around a bit]
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> drivers/nvme/host/core.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 63bb1da0861e9d..82ad5eef9d0c30 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -4096,6 +4096,12 @@ static void nvme_validate_or_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
> nsid);
> break;
> }
> + if (!nvme_multi_css(ctrl)) {
> + dev_warn(ctrl->device,
> + "command set not reported for nsid: %d\n",
> + ns->head->ns_id);
> + break;
> + }
> nvme_alloc_ns(ctrl, nsid, &ids);
> break;
> default:
This looks good.
^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH V10 7/8] nvme-core: add a helper to print css related error
2021-03-09 4:58 [PATCH V10 0/8] nvmet: add ZBD backend support Chaitanya Kulkarni
` (5 preceding siblings ...)
2021-03-09 4:58 ` [PATCH V10 6/8] nvme-core: check ctrl css before setting up zns Chaitanya Kulkarni
@ 2021-03-09 4:58 ` Chaitanya Kulkarni
2021-03-09 4:58 ` [PATCH V10 8/8] nvme: add comments to nvme_zns_alloc_report_buffer Chaitanya Kulkarni
7 siblings, 0 replies; 25+ messages in thread
From: Chaitanya Kulkarni @ 2021-03-09 4:58 UTC (permalink / raw)
To: linux-nvme; +Cc: hch, kbusch, sagi, damien.lemoal, Chaitanya Kulkarni
Right now there are two functions which print the same error if multiple
command sets are not supported by the namespace in question.
Instead of repeating the error handling code (dev_warn() + error code),
add a helper.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
drivers/nvme/host/core.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 3a372db00b9f..93421bb531b6 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -93,6 +93,12 @@ static void nvme_put_subsystem(struct nvme_subsystem *subsys);
static void nvme_remove_invalid_namespaces(struct nvme_ctrl *ctrl,
unsigned nsid);
+static int nvme_ns_css_print_err(struct nvme_ctrl *ctrl, unsigned int nsid)
+{
+ dev_warn(ctrl->device, "Command set not reported for nsid:%u\n", nsid);
+ return -EINVAL;
+}
+
/*
* Prepare a queue for teardown.
*
@@ -1408,11 +1414,8 @@ static int nvme_identify_ns_descs(struct nvme_ctrl *ctrl, unsigned nsid,
len += sizeof(*cur);
}
- if (nvme_multi_css(ctrl) && !csi_seen) {
- dev_warn(ctrl->device, "Command set not reported for nsid:%d\n",
- nsid);
- status = -EINVAL;
- }
+ if (nvme_multi_css(ctrl) && !csi_seen)
+ status = nvme_ns_css_print_err(ctrl, nsid);
free_data:
kfree(data);
@@ -2218,10 +2221,7 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
if (ns->head->ids.csi == NVME_CSI_ZNS) {
if (!nvme_multi_css(ns->ctrl)) {
- dev_warn(ns->ctrl->device,
- "Command set not reported for nsid:%d\n",
- ns->head->ns_id);
- ret = -EINVAL;
+ ret = nvme_ns_css_print_err(ns->ctrl, ns->head->ns_id);
goto out_unfreeze;
}
ret = nvme_update_zone_info(ns, lbaf);
--
2.22.1
* [PATCH V10 8/8] nvme: add comments to nvme_zns_alloc_report_buffer
2021-03-09 4:58 [PATCH V10 0/8] nvmet: add ZBD backend support Chaitanya Kulkarni
` (6 preceding siblings ...)
2021-03-09 4:58 ` [PATCH V10 7/8] nvme-core: add a helper to print css related error Chaitanya Kulkarni
@ 2021-03-09 4:58 ` Chaitanya Kulkarni
7 siblings, 0 replies; 25+ messages in thread
From: Chaitanya Kulkarni @ 2021-03-09 4:58 UTC (permalink / raw)
To: linux-nvme; +Cc: hch, kbusch, sagi, damien.lemoal, Chaitanya Kulkarni
The report zone buffer size calculation depends on the NVMe report
zones header, the NVMe report zone descriptor, and various block
layer request queue attributes such as queue_max_hw_sectors() and
queue_max_segments(). These queue_XXX attributes are calculated from
different ctrl values in the nvme-core.
Add clear comments about which values are used and how they are
calculated based on the controller's attributes.
This is needed since, when revisiting the code after a long time, it
is not straightforward to understand how the buffer size is calculated
given the variables and ctrl attributes involved.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
drivers/nvme/host/zns.c | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/drivers/nvme/host/zns.c b/drivers/nvme/host/zns.c
index c7e3ec561ba0..03778e0e368b 100644
--- a/drivers/nvme/host/zns.c
+++ b/drivers/nvme/host/zns.c
@@ -120,16 +120,38 @@ static void *nvme_zns_alloc_report_buffer(struct nvme_ns *ns,
size_t bufsize;
void *buf;
+ /*
+ * The minimum buffer size must fit the report zone header and one
+ * zone descriptor.
+ */
const size_t min_bufsize = sizeof(struct nvme_zone_report) +
sizeof(struct nvme_zone_descriptor);
+ /*
+ * Recalculate the number of zones based on the disk capacity and the
+ * zone size.
+ */
nr_zones = min_t(unsigned int, nr_zones,
get_capacity(ns->disk) >> ilog2(ns->zsze));
+ /*
+ * Calculate the buffer size based on the report zone header and the
+ * number of zone descriptors required, one for each zone.
+ */
bufsize = sizeof(struct nvme_zone_report) +
nr_zones * sizeof(struct nvme_zone_descriptor);
+
+ /*
+ * Limit the buffer size to the queue max hw sectors. For NVMe, the
+ * queue max hw sectors value is calculated from the controller's
+ * Maximum Data Transfer Size (MDTS).
+ */
bufsize = min_t(size_t, bufsize,
queue_max_hw_sectors(q) << SECTOR_SHIFT);
+ /*
+ * Limit the buffer size to the queue max segments. For NVMe, the
+ * queue max segments value is calculated from how many controller
+ * pages are needed to fit the max hw sectors.
+ */
bufsize = min_t(size_t, bufsize, queue_max_segments(q) << PAGE_SHIFT);
while (bufsize >= min_bufsize) {
--
2.22.1